[GitHub] indhub closed pull request #8327: update ps lite

2017-10-18 Thread git
indhub closed pull request #8327: update ps lite
URL: https://github.com/apache/incubator-mxnet/pull/8327
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/ps-lite b/ps-lite
index acdb698fa3..bdd4c67e9e 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit acdb698fa3bb80929ef83bb37c705f025e119b82
+Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: update ps lite (#8327)

2017-10-18 Thread indhub
This is an automated email from the ASF dual-hosted git repository.

indhub pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 349ebc3  update ps lite (#8327)
349ebc3 is described below

commit 349ebc36ff3693376287098d280b11069dafbb55
Author: Eric Junyuan Xie 
AuthorDate: Wed Oct 18 01:49:04 2017 -0700

update ps lite (#8327)
---
 ps-lite | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ps-lite b/ps-lite
index acdb698..bdd4c67 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit acdb698fa3bb80929ef83bb37c705f025e119b82
+Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] cjolivier01 commented on issue #8340: Fill optimizations

2017-10-18 Thread GitBox
cjolivier01 commented on issue #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-337758247
 
 
   You can re-add it if you need it sometime, although it's better to use 
OpBase::SetToScalar or op_with_req (the op_with_req 
override of Map() for setting a scalar is in a separate PR) because those 
properly handle Req.
   




[GitHub] VikingMew commented on issue #7582: Gluon GPU memory efficiency

2017-10-18 Thread GitBox
VikingMew commented on issue #7582: Gluon GPU memory efficiency
URL: 
https://github.com/apache/incubator-mxnet/issues/7582#issuecomment-337760280
 
 
   @jermainewang How to do (2)?




[GitHub] javelinjs commented on issue #8297: [scala] Make accuracy idependant of output size (fix #8226)

2017-10-18 Thread GitBox
javelinjs commented on issue #8297: [scala] Make accuracy idependant of output 
size (fix #8226)
URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-337790078
 
 
   We should keep it the same as the other language bindings, especially Python. 
What if we make it a Double in the EvalMetric API?




[GitHub] tqchen commented on issue #8343: [CMAKE] Cmake changes, upgrade training test so it converge

2017-10-18 Thread GitBox
tqchen commented on issue #8343: [CMAKE] Cmake changes, upgrade training test 
so it converge
URL: https://github.com/apache/incubator-mxnet/pull/8343#issuecomment-337796868
 
 
   cc @piiswrong 




[GitHub] wa1618i commented on issue #5218: core dumped when I try to compile mxnet0.9.3 with nnpack support WHY?

2017-10-18 Thread GitBox
wa1618i commented on issue #5218: core dumped when I try to compile mxnet0.9.3 
with nnpack support WHY?
URL: 
https://github.com/apache/incubator-mxnet/issues/5218#issuecomment-337805448
 
 
   @szha I have the same issue here, please re-open. I can build MXNet from 
source without error, but when I try to import mxnet inside Python I get the error:
   
   [12:43:09] /home/lemma/mxnet/dmlc-core/include/dmlc/logging.h:308: 
[12:43:09] src/operator/nnpack/nnpack_util.h:43: nnp_initialize failed status=51
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/lemma/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet2op16NNPACKInitializeC1Ev+0x2fb)
 [0x7f25d2ced66b]
   [bt] (1) /home/lemma/mxnet/python/mxnet/../../lib/libmxnet.so(+0x66d0a6) 
[0x7f25d2a750a6]
   [bt] (2) /lib64/ld-linux-x86-64.so.2(+0x102da) [0x7f25fc9532da]
   [bt] (3) /lib64/ld-linux-x86-64.so.2(+0x103c3) [0x7f25fc9533c3]
   [bt] (4) /lib64/ld-linux-x86-64.so.2(+0x14e00) [0x7f25fc957e00]
   [bt] (5) /lib64/ld-linux-x86-64.so.2(+0x10194) [0x7f25fc953194]
   [bt] (6) /lib64/ld-linux-x86-64.so.2(+0x1454b) [0x7f25fc95754b]
   [bt] (7) /lib/x86_64-linux-gnu/libdl.so.2(+0x102b) [0x7f25fc14502b]
   [bt] (8) /lib64/ld-linux-x86-64.so.2(+0x10194) [0x7f25fc953194]
   [bt] (9) /lib/x86_64-linux-gnu/libdl.so.2(+0x162d) [0x7f25fc14562d]
   
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [12:43:09] src/operator/nnpack/nnpack_util.h:43: nnp_initialize 
failed status=51
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/lemma/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet2op16NNPACKInitializeC1Ev+0x2fb)
 [0x7f25d2ced66b]
   [bt] (1) /home/lemma/mxnet/python/mxnet/../../lib/libmxnet.so(+0x66d0a6) 
[0x7f25d2a750a6]
   [bt] (2) /lib64/ld-linux-x86-64.so.2(+0x102da) [0x7f25fc9532da]
   [bt] (3) /lib64/ld-linux-x86-64.so.2(+0x103c3) [0x7f25fc9533c3]
   [bt] (4) /lib64/ld-linux-x86-64.so.2(+0x14e00) [0x7f25fc957e00]
   [bt] (5) /lib64/ld-linux-x86-64.so.2(+0x10194) [0x7f25fc953194]
   [bt] (6) /lib64/ld-linux-x86-64.so.2(+0x1454b) [0x7f25fc95754b]
   [bt] (7) /lib/x86_64-linux-gnu/libdl.so.2(+0x102b) [0x7f25fc14502b]
   [bt] (8) /lib64/ld-linux-x86-64.so.2(+0x10194) [0x7f25fc953194]
   [bt] (9) /lib/x86_64-linux-gnu/libdl.so.2(+0x162d) [0x7f25fc14562d]




[GitHub] ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending 
on use of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337792225
 
 
   Is this a bug or wrong usage of autograd? Do you have any idea? @szha 




[GitHub] kpot commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
kpot commented on issue #8337: mx.autograd.grad works or fails depending on use 
of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337796013
 
 
   @ZiyueHuang Yes, I think the empty line is indeed the argument `''`.
   But when I replace `mx.nd.ones_like(b)` with `mx.nd.ones((1,))` I still get 
the same error. Are you sure that when it worked for you, you actually did use 
slicing?
   
   Just to be on the same page, here's the full code that fails, even though I 
believe it shouldn't:
   ```
   import mxnet as mx
   from mxnet import nd, autograd
   
   ctx = mx.cpu()
   
   a = mx.nd.array([1, 2, 3, 4], ctx=ctx)
   a.attach_grad()
   
   with autograd.record():
       b = nd.sum(2 * (a[0:4] ** 2))   # works without slicing
   
   grads = autograd.grad(b, [a], create_graph=True, retain_graph=True)
   da_sym = autograd.get_symbol(grads[0])
   executor = da_sym.bind(ctx=ctx, args=[nd.ones_like(b), a])
   executor.forward()
   print(executor.outputs[0])
   ```




[GitHub] tqchen closed issue #5804: Scala build failed with "undefined symbol: __cudaRegisterFatBinary"

2017-10-18 Thread GitBox
tqchen closed issue #5804: Scala build failed with "undefined symbol: 
__cudaRegisterFatBinary"
URL: https://github.com/apache/incubator-mxnet/issues/5804
 
 
   




[GitHub] tqchen closed issue #5580: How to create a Custom Operator with extra parameters in Python?

2017-10-18 Thread GitBox
tqchen closed issue #5580: How to create a Custom Operator with extra 
parameters in Python?
URL: https://github.com/apache/incubator-mxnet/issues/5580
 
 
   




[GitHub] tqchen closed issue #5566: MinPy next step prototype

2017-10-18 Thread GitBox
tqchen closed issue #5566: MinPy next step prototype
URL: https://github.com/apache/incubator-mxnet/issues/5566
 
 
   




[GitHub] tqchen closed issue #5592: [Discussion] CreateBackwardOp interface for `Operator`

2017-10-18 Thread GitBox
tqchen closed issue #5592: [Discussion] CreateBackwardOp interface for 
`Operator`
URL: https://github.com/apache/incubator-mxnet/issues/5592
 
 
   




[GitHub] tqchen closed issue #5699: [Discussion] Support Higher-order Gradient

2017-10-18 Thread GitBox
tqchen closed issue #5699: [Discussion] Support Higher-order Gradient
URL: https://github.com/apache/incubator-mxnet/issues/5699
 
 
   




[GitHub] goodtogood commented on issue #7771: Error when loading JSON network definition with c++ api

2017-10-18 Thread GitBox
goodtogood commented on issue #7771: Error when loading JSON network definition 
with c++ api
URL: 
https://github.com/apache/incubator-mxnet/issues/7771#issuecomment-337804646
 
 
   same problem.




[GitHub] szha commented on issue #8340: Fill optimizations

2017-10-18 Thread GitBox
szha commented on issue #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-337758666
 
 
   As long as `full` is still on the radar it's fine. Would you make the change 
to properly support `full` in that PR then?




[GitHub] ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 
with cuda-8.0
URL: 
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337790437
 
 
   Seems that @wa1618i got the same problem in 
https://github.com/apache/incubator-mxnet/issues/8333. Have you found any 
solution so far? @wa1618i




[GitHub] ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending 
on use of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337792011
 
 
   `a`'s shape is `(4,)`.  Why "a's first dimension has length 1 and slicing it 
with 0:4 doesn't really make sense"? @piiswrong 
   
   @kpot Could you please use `da_sym.list_arguments()` to see what the 
arguments are? Why `args=[mx.nd.ones_like(b), a]`?
   




[GitHub] tqchen closed issue #2066: Mxnet model visualisation

2017-10-18 Thread GitBox
tqchen closed issue #2066: Mxnet model visualisation
URL: https://github.com/apache/incubator-mxnet/issues/2066
 
 
   




[GitHub] tqchen closed issue #2284: Compiling MxNet: Problems with CMake

2017-10-18 Thread GitBox
tqchen closed issue #2284: Compiling MxNet: Problems with CMake
URL: https://github.com/apache/incubator-mxnet/issues/2284
 
 
   




[GitHub] reminisce commented on a change in pull request #8340: Fill optimizations

2017-10-18 Thread GitBox
reminisce commented on a change in pull request #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#discussion_r145600607
 
 

 ##
 File path: src/operator/tensor/init_op.h
 ##
 @@ -164,19 +164,38 @@ inline bool InitStorageType(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+/*! \brief Fill output with a scalar integer value */
 template<typename xpu, int value>
 void FillCompute(const nnvm::NodeAttrs& attrs,
                  const OpContext& ctx,
                  const std::vector<TBlob>& inputs,
                  const std::vector<OpReqType>& req,
                  const std::vector<TBlob>& outputs) {
-  using namespace mshadow;
-  using namespace mshadow::expr;
-  Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
-    Tensor<xpu, 1, DType> out = outputs[0].FlatTo1D<xpu, DType>(s);
-    ASSIGN_DISPATCH(out, req[0], scalar<DType>(value));
-  });
+  if (req[0] != kNullOp) {
+    mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+    MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+      mxnet_op::Kernel<mxnet_op::set_to_int<value>, xpu>::Launch(
+        s, outputs[0].Size(), outputs[0].dptr<DType>());
+    });
+  }
+}
+
+/*! \brief Fast CPU fill-zero version using memset */
+template<>
+inline void FillCompute<cpu, 0>(const nnvm::NodeAttrs& attrs,
+                                const OpContext& ctx,
+                                const std::vector<TBlob>& inputs,
+                                const std::vector<OpReqType>& req,
+                                const std::vector<TBlob>& outputs) {
+  if (req[0] != kNullOp) {
+    const size_t size = outputs[0].Size();
+    if (size) {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        memset(outputs[0].dptr<DType>(), 0, size * sizeof(DType));
 Review comment:
   `outputs[0].dptr_` is more efficient here than `outputs[0].dptr<DType>()`.
   Question: How much faster is this compared to the original implementation of 
filling a `TBlob`?




[GitHub] dslate opened a new issue #8344: Broken link for "MXNet R Reference Manual"?

2017-10-18 Thread GitBox
dslate opened a new issue #8344: Broken link for "MXNet R Reference Manual"?
URL: https://github.com/apache/incubator-mxnet/issues/8344
 
 
   On the MXNet page for the R API 
(http://mxnet.incubator.apache.org/api/r/index.html) the link to the "MXNet R 
Reference Manual" seems to point to a document called "mxnet-test.pdf", whose 
contents are titled "MXNet Documentation Release 0.0.8" and seem to be for the 
Julia language, not R.
   
   Is this link incorrect, or am I just confused?  I am looking for the MXNet R 
API.
   
   Thanks.
   




[GitHub] ykim362 commented on issue #7931: MKL-DNN integration: request for reviews

2017-10-18 Thread GitBox
ykim362 commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-337757546
 
 
   @szha @piiswrong MKL-DNN doesn't support the fp64 (double) data type. Do you 
think this is an issue? The library team is focusing more on adding lower 
precisions.




[GitHub] vinig opened a new issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray unit test fails

2017-10-18 Thread GitBox
vinig opened a new issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray 
unit test fails
URL: https://github.com/apache/incubator-mxnet/issues/8231
 
 
   ## Description
   I'm trying to build MXNet (0.11.0) for Raspberry Pi 3 with Python 2.7, 
OpenBLAS, OpenCV and LAPACK (cross-compiled MXNet on RHEL).
   When I run the unit tests (tests/python/unittest), 
test_ndarray.test_ndarray_slice fails with an AssertionError (see the Error 
Message section). I upgraded the numpy and scipy versions, since the Debian 
package manager was installing older versions that were not compatible with 
the tests. The current numpy version is 1.13.3 and the scipy version is 1.19.1. 
The version upgrade resolved the other unit test failures except this one. It 
is strange because none of the functionality is broken, yet the arrays differ 
(see the last section). How is that happening?
   
   My question is: what is the correct set of versions for the various 
dependencies to build and use MXNet on an RPi 3?
   My aim is to get all the unit tests passing for MXNet version 0.11.0 on 
RPi 3.
   
   ## Environment info
   
   ```
   --Python Info--
   ('Version  :', '2.7.9')
   ('Compiler :', 'GCC 4.9.2')
   ('Build:', ('default', 'Sep 17 2016 20:26:04'))
   ('Arch :', ('32bit', 'ELF'))
   Pip Info---
   ('Version  :', '1.5.6')
   ('Directory:', '/usr/lib/python2.7/dist-packages/pip')
   --MXNet Info---
   ('Version  :', '0.11.0')
   ('Directory:', 
'/usr/local/lib/python2.7/dist-packages/mxnet-0.11.0-py2.7.egg/mxnet')
   Traceback (most recent call last):
 File "diagnose.py", line 108, in check_mxnet
   with open(commit_hash, 'r') as f:
   IOError: [Errno 2] No such file or directory: 
'/usr/local/lib/python2.7/dist-packages/mxnet-0.11.0-py2.7.egg/mxnet/COMMIT_HASH'
   
   --System Info--
   ('Platform :', 'Linux-4.9.35-v7+-armv7l-with-debian-8.0')
   ('system   :', 'Linux')
   ('node :', 'raspberrypi')
   ('release  :', '4.9.35-v7+')
   ('version  :', '#1014 SMP Fri Jun 30 14:47:43 BST 2017')
   --Hardware Info--
   ('machine  :', 'armv7l')
   ('processor:', '')
   Architecture:  armv7l
   Byte Order:Little Endian
   CPU(s):4
   On-line CPU(s) list:   0-3
   Thread(s) per core:1
   Core(s) per socket:4
   Socket(s): 1
   Model name:ARMv7 Processor rev 4 (v7l)
   CPU max MHz:   1200.
   CPU min MHz:   600.
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0101 
sec, LOAD: 0.5146 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0095 sec, LOAD: 
0.2694 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0456 sec, LOAD: 0.1679 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0166 sec, 
LOAD: 0.0695 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0106 sec, LOAD: 
0.0516 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0467 sec, LOAD: 
0.2191 sec.
   ```
   
   Package used (Python/R/Scala/Julia): Python
   
   ## Build info
   
   Compiler (gcc/clang/mingw/visual studio): gcc version 4.9.2 (Raspbian 
4.9.2-10) (target: arm-linux-gnueabihf)
   
   MXNet commit hash: a5edbf94094581ee27157eae4f2113115a3994e7
   
   Build config:
   OpenBLAS build on Pi, installed and ported to RHEL for cross-compilation:
   make FC=gfortran -j4
   
   config.mk
   USE_PROFILER=1
   ADD_LDFLAGS=-L/path/to/openblas/ext/lib /path/to/static/libopenblas.a
   ADD_CFLAGS=-I/path/to/openblas/ext/include
   USE_BLAS=openblas
   USE_OPENCV=1
   USE_OPENMP=0
   USE_LAPACK=1
   USE_LAPACK_PATH=/path/to/lapack/static/lib
   
   MXNet installation depends on following libraries:
   librt.so.1
   libopencv_dnn.so.3.3
   libopencv_ml.so.3.3
   libopencv_objdetect.so.3.3
   libopencv_shape.so.3.3
   libopencv_stitching.so.3.3
   libopencv_superres.so.3.3
   libopencv_videostab.so.3.3
   libopencv_calib3d.so.3.3
   libopencv_features2d.so.3.3
   libopencv_highgui.so.3.3
   libopencv_videoio.so.3.3
   libopencv_imgcodecs.so.3.3
   libopencv_video.so.3.3
   libopencv_photo.so.3.3
   libopencv_imgproc.so.3.3
   libopencv_flann.so.3.3
   libopencv_core.so.3.3
   libstdc++.so.6
   libm.so.6
   libgcc_s.so.1
   libpthread.so.0
   libc.so.6
   ld-linux-armhf.so.3
   libopenblas.a
   liblapack.a
   
   ## Error Message:
   ```
   ==
   FAIL: test_ndarray.test_ndarray_slice
   --
   Traceback (most recent call last):
 File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
   self.test(*self.arg)
 File 

[GitHub] rahul003 opened a new pull request #8342: [WIP] 2bit gradient compression

2017-10-18 Thread GitBox
rahul003 opened a new pull request #8342: [WIP] 2bit gradient compression
URL: https://github.com/apache/incubator-mxnet/pull/8342
 
 
   ## Description ##
   Implements 2-bit gradient compression by quantizing each value in the 
gradient array to 2 bits, using two user-specified thresholds, one for positive 
and one for negative values. 
   
   @eric-haibin-lin @piiswrong @reminisce @anirudh2290 @bhavinthaker @madjam 
@cjolivier01 
   Please review. This is a work in progress. I'm currently running this with 
different kinds of models to get performance results.
   
   ### Important files to review
   Operator
   - two_bit_quantize-inl.h
   - two_bit_quantize.cc
   
   KVStore local
   - comm.h
   
   KVStore dist
   - kvstore_dist.h
   - kvstore_dist_server.h
   
   Documentation about gradient compression
   - kvstore.py
   - two_bit_quantize.cc
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated.
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] two-bit-quantize and dequantize operators
   - [ ] Reduce operation in kvstore_local / comm.h
   - [ ] Distributed kvstore changes at worker and server
   - [ ] Tests for operator, local kvstore, distributed kvstore with predefined 
and random data. The results have been compared with expected values by 
implementing this logic in python.
   - [ ] API changes for Kvstore, Module and Trainer in python
   
   ## Comments ##
   ### Problem 
   When training large scale deep learning models especially with distributed 
training, communication becomes a bottleneck for networks whose computation is 
not high compared to the communication. 
   
   ### Approach
   We can try to address this by quantizing the gradients before sending and 
dequantizing it at the receiver's end. The sender would retain the quantization 
error and add it to the next iteration, effectively delaying small updates to 
positions in the gradient. Specifically in this PR, currently 2bit quantization 
has been implemented.
   
   ### Two bit quantization
   Use two thresholds to quantize the data: one positive threshold and one 
negative threshold. Any positive value greater than or equal to the positive 
threshold is set to one value (say 01), any negative value lower than or equal 
to the negative threshold is set to a second value (say 10), and all others are 
set to a third value (say 00). We need three values to represent data in this 
fashion, and hence two bits. We understand this leaves one bit unused, but 
that's an optimization for later, as it complicates the operators. The error 
introduced by quantization is stored as a residual and carried over to the next 
iteration, where it is added to the gradient before quantizing. 
   An example below with thresholds of -2.0 and 2.0:
   ![Quantization at work](https://i.imgur.com/AtBVg92.png)
   
   ### Format of compressed gradient
   The first two elements are the thresholds used for quantization. The third 
element is the size of the original array. These values are required to 
dequantize the gradient. Every element from the 4th element onward represents 
the compressed gradient, with each such value encoding up to 16 elements of the 
original array. For the example above, we get
   ```compr = [ -2.0, 2.0, 8, 6.1215606E-28]```
   Note that the binary representation of the last element is 
   ```00 01 00 10 01 00 00 10  ```
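   The quantize/dequantize/residual scheme described above can be sketched in 
plain Python. This is a hypothetical illustration with numpy, not the PR's 
actual C++ operators; the function names are assumptions, and bit packing into 
16-elements-per-word slots is omitted for clarity:

   ```python
   import numpy as np

   def two_bit_quantize(grad, residual, neg_threshold=-2.0, pos_threshold=2.0):
       """Quantize a gradient to 2-bit codes, carrying the error as a residual."""
       g = grad + residual                       # add error retained from the last step
       codes = np.zeros(g.shape, dtype=np.uint8)
       codes[g >= pos_threshold] = 0b01          # "large positive" bucket
       codes[g <= neg_threshold] = 0b10          # "large negative" bucket
       decoded = np.where(codes == 0b01, pos_threshold,
                 np.where(codes == 0b10, neg_threshold, 0.0))
       return codes, g - decoded                 # new residual = quantization error

   def two_bit_dequantize(codes, neg_threshold=-2.0, pos_threshold=2.0):
       """Reconstruct the (lossy) gradient the receiver would see."""
       return np.where(codes == 0b01, pos_threshold,
              np.where(codes == 0b10, neg_threshold, 0.0))

   grad = np.array([1.0, 3.5, -4.0, 0.5])
   codes, residual = two_bit_quantize(grad, np.zeros_like(grad))
   print(codes.tolist())                         # [0, 1, 2, 0]
   print(two_bit_dequantize(codes).tolist())     # [0.0, 2.0, -2.0, 0.0]
   print(residual.tolist())                      # [1.0, 1.5, -2.0, 0.5]
   ```

   With 2 bits per element, 16 original values would then pack into each 32-bit 
slot of the compressed array, matching the format described above.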
   
   ### Local kvstore
   When using the local kvstore, gradient compression only happens when using 
device communication. When gradients are pushed, quantization and 
dequantization happen before summing them up (Reduce).
   Example: Say we have 4 GPUs, and the gradients are being summed up on GPU0. 
Each device quantizes its gradients, then sends the quantized gradient to GPU0, 
which dequantizes this data before merging it with the values from the other 
GPUs. Note that there is no real need to quantize the gradients from GPU0 
itself, but it is still done so that there is no bias toward the samples that 
were processed by GPU0. **Please let me know if this is not a good idea.**
   
   ### Dist kvstore
   When the set_compress method of kvstore is called, each worker sets the 
compress params, and one worker sends these params to all servers. From then 
on, before each value is pushed to the server, it is quantized. The server 
dequantizes the data and stores it as an array of the original size. When 
values are pulled from the server, it returns an array of the original size. 
The same happens when each server is handling shards of the data.
   
   ### Usage
   The reason I used a dictionary compress_params for the arguments was to 
ensure uniformity when we extend this 

[GitHub] kpot commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
kpot commented on issue #8337: mx.autograd.grad works or fails depending on use 
of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337794186
 
 
   @ZiyueHuang `a`'s first dimension is 4, and slicing it like `a[0:4]` is 
absolutely valid; I wasn't concerned with efficiency here. But after you asked, 
I tried different expressions and different sizes of `a` (for example, 
`a = mx.nd.array([ [ 1, 2, 3, 4] ])`, slicing it in the expression as `a[0]`). 
None of that has worked. I still see the same error every time I use slicing.
   
   `da_sym.list_arguments()` returns `['', 'var0']`. One must be the head 
gradient for the chain rule, and the other a placeholder for the variable 
`a`. That's why I used such arguments. Which is which I determined 
experimentally, since the two have different shapes, and I could easily check 
the result of `executor.forward()` knowing the derivative `db / da = 4 * a`.




[GitHub] tqchen closed issue #4003: Building a visualization tool for MXNet

2017-10-18 Thread GitBox
tqchen closed issue #4003: Building a visualization tool for MXNet
URL: https://github.com/apache/incubator-mxnet/issues/4003
 
 
   




[GitHub] tqchen closed issue #3198: Check list for more operators

2017-10-18 Thread GitBox
tqchen closed issue #3198: Check list for more operators
URL: https://github.com/apache/incubator-mxnet/issues/3198
 
 
   




[GitHub] tqchen closed issue #3917: how to use my own loss to compute grad?

2017-10-18 Thread GitBox
tqchen closed issue #3917: how to use my own loss to compute grad?
URL: https://github.com/apache/incubator-mxnet/issues/3917
 
 
   




[GitHub] tqchen closed issue #5346: Amalgamation using weBLAS (JS + WebGL) ?

2017-10-18 Thread GitBox
tqchen closed issue #5346: Amalgamation using weBLAS (JS + WebGL) ?
URL: https://github.com/apache/incubator-mxnet/issues/5346
 
 
   




[GitHub] tqchen closed issue #3987: How to set auxiliary state in Batchnorm manually ?

2017-10-18 Thread GitBox
tqchen closed issue #3987: How to set auxiliary state in Batchnorm manually ?
URL: https://github.com/apache/incubator-mxnet/issues/3987
 
 
   




[GitHub] tqchen closed issue #3946: When predicting, does mxnet provide thread-safe interface?

2017-10-18 Thread GitBox
tqchen closed issue #3946: When predicting, does mxnet provide thread-safe 
interface?
URL: https://github.com/apache/incubator-mxnet/issues/3946
 
 
   




[GitHub] eric-haibin-lin opened a new pull request #8345: Misc fixes for sparse distributed training

2017-10-18 Thread GitBox
eric-haibin-lin opened a new pull request #8345: Misc fixes for sparse 
distributed training
URL: https://github.com/apache/incubator-mxnet/pull/8345
 
 
   ## Description ##
   - As #8116 removes 
[wait_to_write](https://github.com/apache/incubator-mxnet/pull/8116/files#diff-0cd6fcb2cd941d4c4a829bb3d7ea3d63L274)
 when updating comm_buff, it's not safe to pass NDArray* to the callback for 
row_sparse_pull. Now changed to NDArray. 
   - Removed the usage of `mshadow::range` in `FillDnsZerosRspImpl` since 
`mshadow::range` uses float to calculate the tensor shape and is inaccurate for 
large shapes. 
   - Added unit test for pulling empty sparse weights
   - Removed wrong/misleading comments
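
   The float-precision concern behind dropping `mshadow::range` can be 
illustrated in plain NumPy (a standalone sketch, not MXNet code): float32 has 
a 24-bit significand, so integers above 2**24 can no longer be represented 
exactly, and float-based shape arithmetic drifts for large tensors.

```python
import numpy as np

n = 2**24 + 1  # 16777217: first integer float32 cannot represent
# float32 rounds 2**24 + 1 down to 2**24, so float-computed indices are wrong
assert int(np.float32(n)) != n
# float64 still represents this value exactly
assert int(np.float64(n)) == n
```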
   
   @bhavinthaker @madjam @rahul003 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated.
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending 
on use of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337792011
 
 
   `a`'s shape is `(4,)`.  Why "a's first dimension has length 1 and slicing it 
with 0:4 doesn't really make sense"? @piiswrong 
   ```
   >>> import numpy as np
   >>> a=np.ones((3,))
   >>> a[0:2]
   array([ 1.,  1.])
   ```
   @kpot Could you please use `da_sym.list_arguments()` to see what the 
arguments are? Why `args=[mx.nd.ones_like(b), a]`?
   




[GitHub] tqchen closed issue #3006: [R] R package roadmap

2017-10-18 Thread GitBox
tqchen closed issue #3006: [R] R package roadmap
URL: https://github.com/apache/incubator-mxnet/issues/3006
 
 
   




[GitHub] tqchen closed issue #1873: Implement a model serving framework

2017-10-18 Thread GitBox
tqchen closed issue #1873: Implement a model serving framework
URL: https://github.com/apache/incubator-mxnet/issues/1873
 
 
   




[GitHub] tqchen closed issue #1281: how to prepare data for mxnet.io.NDArrayIter

2017-10-18 Thread GitBox
tqchen closed issue #1281: how to prepare data for mxnet.io.NDArrayIter
URL: https://github.com/apache/incubator-mxnet/issues/1281
 
 
   




[GitHub] tqchen closed issue #1784: Bug in ccSGD optimizer

2017-10-18 Thread GitBox
tqchen closed issue #1784: Bug in ccSGD optimizer
URL: https://github.com/apache/incubator-mxnet/issues/1784
 
 
   




[GitHub] tqchen closed issue #2944: v1.0 Stable Release TODO List

2017-10-18 Thread GitBox
tqchen closed issue #2944: v1.0 Stable Release TODO List
URL: https://github.com/apache/incubator-mxnet/issues/2944
 
 
   




[GitHub] tqchen closed issue #3084: Scala Package for v1.0 TODO List

2017-10-18 Thread GitBox
tqchen closed issue #3084: Scala Package for v1.0 TODO List
URL: https://github.com/apache/incubator-mxnet/issues/3084
 
 
   




[GitHub] tqchen commented on issue #8343: [CMAKE] Cmake changes, upgrade training test so it converge

2017-10-18 Thread GitBox
tqchen commented on issue #8343: [CMAKE] Cmake changes, upgrade training test 
so it converge
URL: https://github.com/apache/incubator-mxnet/pull/8343#issuecomment-337801113
 
 
   The test failures appear to be due to the missing CI machines; the tests 
need to be retriggered when they are back up.




[GitHub] szha commented on issue #8340: Fill optimizations

2017-10-18 Thread GitBox
szha commented on issue #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-337758666
 
 
   As long as `full` is still on the radar it's fine.




[GitHub] zhangqianghd closed issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-18 Thread GitBox
zhangqianghd closed issue #8309: asnumpy is slowly ,how can I speed up it?
URL: https://github.com/apache/incubator-mxnet/issues/8309
 
 
   




[GitHub] zhangqianghd commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-18 Thread GitBox
zhangqianghd commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337780811
 
 
   After adding `wait_to_read`, I found the details.
   Thanks, all.




[GitHub] tqchen opened a new pull request #8343: [CMAKE] Cmake changes, upgrade training test so it converge

2017-10-18 Thread GitBox
tqchen opened a new pull request #8343: [CMAKE] Cmake changes, upgrade training 
test so it converge
URL: https://github.com/apache/incubator-mxnet/pull/8343
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated.
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8337: mx.autograd.grad works or fails depending 
on use of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337796480
 
 
   Sorry, I just pasted the code you posted in the first comment, without 
adding the slice.




[GitHub] tqchen closed issue #3200: [OP] Array manipulation

2017-10-18 Thread GitBox
tqchen closed issue #3200: [OP] Array manipulation
URL: https://github.com/apache/incubator-mxnet/issues/3200
 
 
   




[GitHub] tqchen closed issue #3487: [PERF] Call for NN Layer Kernel Improvement

2017-10-18 Thread GitBox
tqchen closed issue #3487: [PERF] Call for NN Layer Kernel Improvement
URL: https://github.com/apache/incubator-mxnet/issues/3487
 
 
   




[GitHub] tqchen closed issue #3523: Operator documents issues tracking

2017-10-18 Thread GitBox
tqchen closed issue #3523: Operator documents issues tracking
URL: https://github.com/apache/incubator-mxnet/issues/3523
 
 
   




[GitHub] tqchen closed issue #3504: [RFC] Documentation of MXNet

2017-10-18 Thread GitBox
tqchen closed issue #3504:  [RFC] Documentation of MXNet
URL: https://github.com/apache/incubator-mxnet/issues/3504
 
 
   




[GitHub] tqchen closed issue #3724: asnumpy() of NDArray @cpu halted

2017-10-18 Thread GitBox
tqchen closed issue #3724: asnumpy() of NDArray @cpu  halted
URL: https://github.com/apache/incubator-mxnet/issues/3724
 
 
   




[GitHub] tqchen closed issue #3201: [OP] Mathematical functions

2017-10-18 Thread GitBox
tqchen closed issue #3201: [OP] Mathematical functions
URL: https://github.com/apache/incubator-mxnet/issues/3201
 
 
   




[GitHub] moakra commented on issue #8248: A3C code does not learn

2017-10-18 Thread GitBox
moakra commented on issue #8248: A3C code does not learn
URL: 
https://github.com/apache/incubator-mxnet/issues/8248#issuecomment-337756962
 
 
   I was talking about this code: 
https://github.com/apache/incubator-mxnet/tree/master/example/reinforcement-learning/a3c
  
   
   Now it works.
   
   Thank you!




[GitHub] liuzhi136 opened a new issue #8341: Training error always fluctuates and doesn't decrease.

2017-10-18 Thread GitBox
liuzhi136 opened a new issue #8341: Training error always fluctuates and 
doesn't decrease.
URL: https://github.com/apache/incubator-mxnet/issues/8341
 
 
   I implemented the model defined in "Training RNNs as Fast as CNNs". The 
structure I wrote is below:
   
![1](https://user-images.githubusercontent.com/13534043/31749188-fd014a78-b4aa-11e7-82e5-10732df9cb2a.png)
   
![2](https://user-images.githubusercontent.com/13534043/31749189-fd3d7e30-b4aa-11e7-95a2-89fc72078226.png)
   
![3](https://user-images.githubusercontent.com/13534043/31749190-fd72ba0a-b4aa-11e7-95cc-1145d1da7b5c.png)
   The training and validation error looks like:
   ![screenshot from 2017-10-19 
08-50-15](https://user-images.githubusercontent.com/13534043/31749202-149d4f9c-b4ab-11e7-8f80-7b7ea7a1cd0f.png)
   The training data I used:
   
https://raw.githubusercontent.com/harvardnlp/sent-conv-torch/master/data/rt-polarity.all
   I really don't know why this model fluctuates all the time. Does anyone 
have any idea about this?
   I really need to solve this soon. Any help will be appreciated!




[GitHub] kaisark commented on issue #6846: Check failed: err == cudaSuccess (8 vs. 0) Name: MapPlanKernel ErrStr:invalid device function

2017-10-18 Thread GitBox
kaisark commented on issue #6846: Check failed: err == cudaSuccess (8 vs. 0) 
Name: MapPlanKernel ErrStr:invalid device function
URL: 
https://github.com/apache/incubator-mxnet/issues/6846#issuecomment-337768151
 
 
   I was able to build mxnet on my TX1 with gpu today. I did add 53 in the 
Makefile for thoroughness.
   https://mxnet.incubator.apache.org/get_started/install.html
   Devices -> Nvidia Jetson TX2
   Linux -> Python -> GPU -> Build from Source
   
   nvidia@tegra-ubuntu:~/mxnet/python$ uname -a
   Linux tegra-ubuntu 4.4.38-jetsonbot-doc-v0.3 #1 SMP PREEMPT Thu Oct 5 
15:58:24 EDT 2017 aarch64 aarch64 aarch64 GNU/Linux
   
   nvidia@tegra-ubuntu:~/mxnet/python$ make USE_OPENCV=1 USE_BLAS=openblas 
USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
   
   nvidia@tegra-ubuntu:~/mxnet/python$ pip list -e
   mxnet (0.11.1, /home/nvidia/mxnet/python)
   
   nvidia@tegra-ubuntu:~/mxnet/python$ python
   Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
   [GCC 5.4.0 20160609] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> a = mx.nd.ones((2, 3), mx.gpu())
   >>> b = a * 2 + 1
   >>> b.asnumpy()
   array([[ 3.,  3.,  3.],
  [ 3.,  3.,  3.]], dtype=float32)
   >>> 
   




[GitHub] ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-18 Thread GitBox
ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 
with cuda-8.0
URL: 
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337789562
 
 
   Thanks for your suggestions.
   
   Seems weird. It builds successfully on Ubuntu, so why does it fail on 
CentOS?
   
   Yes, the error messages are complete. 
   
   Warnings are not fixed by reverting `smooth_l1`; I changed the code to
   ```
   struct smooth_l1_loss {
     // a is x, b is sigma2
     template<typename DType>
     MSHADOW_XINLINE static DType Map(DType a, DType b) {
       b *= b;
       if (a > (1.0f) / b) {
         return a - (0.5f) / b;
       } else if (a < (-1.0f) / b) {
         return -a - (0.5f) / b;
       } else {
         return (0.5f) * a * a * b;
       }
     }
   };  // struct smooth_l1_loss

   struct smooth_l1_gradient {
     // a is x, b is sigma2
     template<typename DType>
     MSHADOW_XINLINE static DType Map(DType a, DType b) {
       b *= b;
       if (a > (1.0f) / b) {
         return (1.0f);
       } else if (a < (-1.0f) / b) {
         return DType(-1.0f);
       } else {
         return b * a;
       }
     }
   };  // struct smooth_l1_gradient
   ```
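
   For reference, a NumPy transcription of the two smooth-L1 kernels above 
(a sketch for value-checking only; `sigma` is squared internally, mirroring 
`b *= b` in the C++ code):

```python
import numpy as np

def smooth_l1(x, sigma):
    # Quadratic for |x| <= 1/sigma^2, linear beyond, matching the C++ branches
    b = sigma * sigma
    return np.where(x > 1.0 / b, x - 0.5 / b,
           np.where(x < -1.0 / b, -x - 0.5 / b, 0.5 * x * x * b))

def smooth_l1_grad(x, sigma):
    # Derivative: +/-1 in the linear regions, b * x in the quadratic region
    b = sigma * sigma
    return np.where(x > 1.0 / b, 1.0,
           np.where(x < -1.0 / b, -1.0, b * x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
assert np.allclose(smooth_l1_grad(x, 1.0), [-1.0, -0.5, 0.0, 0.5, 1.0])
```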
   Below is the whole chunk of warning and error messages,
   ```
   src/operator/tensor/./../mshadow_op.h(1093): warning: floating-point value 
does not fit in required integral type
 detected during:
   instantiation of "DType 
mxnet::op::mshadow_op::smooth_l1_gradient::Map(DType, DType) [with 
DType=uint8_t]"
   /home/hanfeng/zyh/build/mshadow/mshadow/././expr_engine-inl.h(131): here
   instantiation of "DType 
mshadow::expr::Plan, 
DType>::Eval(mshadow::index_t, mshadow::index_t) const [with 
OP=mxnet::op::mshadow_op::smooth_l1_gradient, TA=mshadow::Tensor, TB=mshadow::expr::ScalarExp, etype=1, DType=uint8_t]"
   /home/hanfeng/zyh/build/mshadow/mshadow/././expr_engine-inl.h(131): here
   instantiation of "DType 
mshadow::expr::Plan, 
DType>::Eval(mshadow::index_t, mshadow::index_t) const [with 
OP=mshadow::op::mul, TA=mshadow::Tensor, 
TB=mshadow::expr::BinaryMapExp, mshadow::expr::ScalarExp, 
uint8_t, 1>, etype=1, DType=uint8_t]"
   /home/hanfeng/zyh/build/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh(75): 
here
   instantiation of "void 
mshadow::cuda::MapPlanProc(DstPlan, 
mshadow::index_t, mshadow::Shape<2>, Plan, int) [with 
Saver=mshadow::sv::saveto, 
DstPlan=mshadow::expr::Plan, uint8_t>, 
Plan=mshadow::expr::Plan, 
mshadow::expr::BinaryMapExp, mshadow::expr::ScalarExp, 
uint8_t, 1>, uint8_t, 1>, uint8_t>, block_dim_bits=8]"
   /home/hanfeng/zyh/build/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh(83): 
here
   instantiation of "void 
mshadow::cuda::MapPlanKernel(DstPlan, 
mshadow::index_t, mshadow::Shape<2>, Plan) [with Saver=mshadow::sv::saveto, 
block_dim_bits=8, DstPlan=mshadow::expr::Plan, uint8_t>, 
Plan=mshadow::expr::Plan, 
mshadow::expr::BinaryMapExp, mshadow::expr::ScalarExp, 
uint8_t, 1>, uint8_t, 1>, uint8_t>]"
   
/home/hanfeng/zyh/build/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh(109): 
here
   instantiation of "void 
mshadow::cuda::MapPlan(mshadow::expr::Plan, const mshadow::expr::Plan &, mshadow::Shape<2>, cudaStream_t) 
[with Saver=mshadow::sv::saveto, DstExp=mshadow::Tensor, E=mshadow::expr::BinaryMapExp, 
mshadow::expr::BinaryMapExp, mshadow::expr::ScalarExp, 
uint8_t, 1>, uint8_t, 1>, DType=uint8_t]"
   /home/hanfeng/zyh/build/mshadow/mshadow/./tensor_gpu-inl.h(115): here
   instantiation of "void 
mshadow::MapExp(mshadow::TRValue *, const mshadow::expr::Exp &) [with 
Saver=mshadow::sv::saveto, R=mshadow::Tensor, dim=1, 
DType=uint8_t, E=mshadow::expr::BinaryMapExp, 
mshadow::expr::BinaryMapExp, mshadow::expr::ScalarExp, 
uint8_t, 1>, 

[GitHub] tqchen closed issue #792: Support a parameter for maximum memory usage

2017-10-18 Thread GitBox
tqchen closed issue #792: Support a parameter for maximum memory usage
URL: https://github.com/apache/incubator-mxnet/issues/792
 
 
   




[GitHub] tqchen closed issue #4783: [v0.9.3] Amalgamation for Android broken

2017-10-18 Thread GitBox
tqchen closed issue #4783: [v0.9.3] Amalgamation for Android broken
URL: https://github.com/apache/incubator-mxnet/issues/4783
 
 
   




[GitHub] tqchen closed issue #4989: How to use mxnet?

2017-10-18 Thread GitBox
tqchen closed issue #4989: How to use mxnet?
URL: https://github.com/apache/incubator-mxnet/issues/4989
 
 
   




[GitHub] tqchen closed issue #3509: [RELEASE] Announcing v0.9 Release Candidate 1

2017-10-18 Thread GitBox
tqchen closed issue #3509: [RELEASE] Announcing v0.9 Release Candidate 1
URL: https://github.com/apache/incubator-mxnet/issues/3509
 
 
   




[GitHub] tqchen closed issue #4851: error: ISO C++ forbids comparison between pointer and integer

2017-10-18 Thread GitBox
tqchen closed issue #4851: error: ISO C++ forbids comparison between pointer 
and integer
URL: https://github.com/apache/incubator-mxnet/issues/4851
 
 
   




[GitHub] tqchen commented on issue #6996: Label smoothing for SoftmaxOutput / cross-entropy loss

2017-10-18 Thread GitBox
tqchen commented on issue #6996: Label smoothing for SoftmaxOutput / 
cross-entropy loss
URL: 
https://github.com/apache/incubator-mxnet/issues/6996#issuecomment-337798214
 
 
   Closing, as this is enabled.




[GitHub] tqchen closed issue #1237: can mxnet provide the sparse gradient update for word embedding

2017-10-18 Thread GitBox
tqchen closed issue #1237: can mxnet provide the sparse gradient update for  
word embedding 
URL: https://github.com/apache/incubator-mxnet/issues/1237
 
 
   




[GitHub] tqchen closed issue #5807: Model Parallelism

2017-10-18 Thread GitBox
tqchen closed issue #5807: Model Parallelism
URL: https://github.com/apache/incubator-mxnet/issues/5807
 
 
   




[GitHub] tqchen closed issue #6996: Label smoothing for SoftmaxOutput / cross-entropy loss

2017-10-18 Thread GitBox
tqchen closed issue #6996: Label smoothing for SoftmaxOutput / cross-entropy 
loss
URL: https://github.com/apache/incubator-mxnet/issues/6996
 
 
   




[GitHub] chinakook opened a new issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20%

2017-10-18 Thread GitBox
chinakook opened a new issue #8335: Performance of MXNet on Windows  is lower 
than that on Linux by 15%-20%
URL: https://github.com/apache/incubator-mxnet/issues/8335
 
 
   I've been testing MXNet on both Win10 and Ubuntu 16.04 for a long time. I 
found that performance on Windows is lower than on Linux by 15%-20% in both 
GPU and CPU contexts. I wonder what causes this performance gap, and I hope 
someone can solve this problem. My config is as follows:
   MXNet on Ubuntu:
   MKL blas/CUDA8/CUDNN6/OpenCV 2.4.13/no jemalloc, no gperftools/building with 
Makefile
   MXNet on Windows:
   MKL blas/CUDA8/CUDNN6/OpenCV 2.4.13/no jemalloc, no gperftools/building with 
CMake





[GitHub] benqua commented on issue #8297: [scala] Make accuracy idependant of output size (fix #8226)

2017-10-18 Thread GitBox
benqua commented on issue #8297: [scala] Make accuracy idependant of output 
size (fix #8226)
URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-337598650
 
 
   @javelinjs , you're right, it changes the definition of accuracy for 
output.size > 1.
   What is the exact definition of Accuracy? I couldn't find a clear definition.
   
   This change provides a definition of accuracy that matches the one from 
Wikipedia for binary classification, which says: 
   _the accuracy is the proportion of true results (both true positives and 
true negatives) among the total number of cases examined_
   
(https://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification).
   
   It seems weird (at least to me :) ) that the accuracy depends on the output 
dimension and can grow to very large numbers. By dividing by the label 
dimension, we keep the accuracy between 0 and 1, which is the expected range of 
a "proportion".
   
   If we change sumMetric to  Double, should we do it only for the value stored 
internally and keep float in the EvalMetric API?
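
   The proportion-style accuracy described above can be sketched in NumPy 
(the prediction/label arrays here are made-up examples, not from the PR):

```python
import numpy as np

# Hypothetical multi-dimensional predictions and labels
pred  = np.array([[1, 0, 1],
                  [0, 1, 1]])
label = np.array([[1, 1, 1],
                  [0, 1, 0]])

correct = (pred == label).sum()  # true positives + true negatives
total   = label.size             # divide by the label dimension as well
accuracy = correct / total       # stays in [0, 1] regardless of output size

assert 0.0 <= accuracy <= 1.0
assert np.isclose(accuracy, 4 / 6)
```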




[GitHub] mseeger commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
mseeger commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337591796
 
 
   I am looking at pages like this:
   
   
https://stackoverflow.com/questions/15911890/overriding-return-type-in-function-template-specialization
   
   It seems to be complicated to change the return type in the specialization.




[GitHub] chinakook opened a new issue #8336: Building with libjpeg-turbo error

2017-10-18 Thread GitBox
chinakook opened a new issue #8336: Building with libjpeg-turbo error
URL: https://github.com/apache/incubator-mxnet/issues/8336
 
 
   ## Description
   I've installed libjpeg-turbo by "sudo apt install libjpeg-turbo8-dev" on 
Ubuntu 16.04
   
   ## Environment info (Required)
   
   ```
   MXNet-master on Ubuntu 16.04
   ```
   
   ## Error Message:
   ```
   /usr/bin/ld: 
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libturbojpeg.a(libturbojpeg_la-turbojpeg.o):
 relocation R_X86_64_32 against `.data' can not be used when making a shared 
object; recompile with -fPIC
   /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libturbojpeg.a: 
error adding symbols: Bad value
   collect2: error: ld returned 1 exit status
   Makefile:384: recipe for target 'lib/libmxnet.so' failed
   make: *** [lib/libmxnet.so] Error 1
   make: *** Waiting for unfinished jobs
   ```
   
   ## Minimum reproducible example
   Just set USE_LIBJPEG_TURBO = 1 and type "make -j"
   




[GitHub] tdomhan opened a new pull request #8334: Bugfix: Python 3 compatiblity during optimizer serialization.

2017-10-18 Thread GitBox
tdomhan opened a new pull request #8334: Bugfix: Python 3 compatiblity during 
optimizer serialization.
URL: https://github.com/apache/incubator-mxnet/pull/8334
 
 
   ## Description ##
   In Python 3, `pickle.dumps` returns a `bytes` object (instead of the `str` 
object in Python 2). The `c_str` function, however, expects a string (on which it 
will call `.encode`). The fix is to detect whether the return value of 
`pickle.dumps` is a `str`; if it is not (which is only the case in Python 3), we 
manually convert the `bytes` object to `str`.
   
   This is the full stack trace before the change:
   ```
   [ERROR:__main__] UNCAUGHT EXCEPTION
   Traceback (most recent call last):
 File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
   "__main__", mod_spec)
 File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
   exec(code, run_globals)
 File "/opt/conda/lib/python3.6/site-packages/sockeye/train.py", line 450, 
in 
   main()
 File "/opt/conda/lib/python3.6/site-packages/sockeye/train.py", line 446, 
in main
   mxmonitor_stat_func=args.monitor_stat_func)
 File "/opt/conda/lib/python3.6/site-packages/sockeye/training.py", line 
218, in fit
   self.module.init_optimizer(kvstore=kvstore, optimizer=optimizer, 
optimizer_params=optimizer_params)
 File 
"/opt/conda/lib/python3.6/site-packages/mxnet-0.12.0-py3.6.egg/mxnet/module/bucketing_module.py",
 line 385, in init_optimizer
   force_init=force_init)
 File 
"/opt/conda/lib/python3.6/site-packages/mxnet-0.12.0-py3.6.egg/mxnet/module/module.py",
 line 531, in init_optimizer
   kvstore.set_optimizer(self._optimizer)
 File 
"/opt/conda/lib/python3.6/site-packages/mxnet-0.12.0-py3.6.egg/mxnet/kvstore.py",
 line 394, in set_optimizer
   self._send_command_to_servers(0, optim_str)
 File 
"/opt/conda/lib/python3.6/site-packages/mxnet-0.12.0-py3.6.egg/mxnet/kvstore.py",
 line 532, in _send_command_to_servers
   self.handle, mx_uint(head), c_str(body)))
 File 
"/opt/conda/lib/python3.6/site-packages/mxnet-0.12.0-py3.6.egg/mxnet/base.py", 
line 189, in c_str
   return ctypes.c_char_p(string.encode('utf-8'))
   AttributeError: 'bytes' object has no attribute 'encode'
   ```
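The bytes-vs-str detection described in this PR can be illustrated with a short, self-contained sketch (the helper name `to_str` is illustrative, not MXNet's actual code; protocol 0 is used so the pickle payload is ASCII-decodable):

```python
import pickle


def to_str(obj):
    """Return a str in both Python 2 and 3.

    pickle.dumps returns bytes on Python 3, so decode it before passing it
    to an API that expects str (such as one that calls .encode on its
    argument, as in the traceback above).
    """
    if not isinstance(obj, str):
        return obj.decode('utf-8')
    return obj


serialized = pickle.dumps({'learning_rate': 0.01}, protocol=0)
body = to_str(serialized)      # now safe to pass where a str is expected
print(type(body).__name__)     # 'str' on Python 3
```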




[GitHub] mseeger commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
mseeger commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337589379
 
 
   It is funny that this should work. If you define `float name(DType a)` in the 
templated class, I thought you could not specialize this to `double name(double 
a)` in the specialization.
   
   We just have to be really careful not to fall back to the old state, namely 
that everything is computed in float, even for DType = double. The unit tests 
should indicate this.




[GitHub] javelinjs commented on issue #8297: [scala] Make accuracy idependant of output size (fix #8226)

2017-10-18 Thread GitBox
javelinjs commented on issue #8297: [scala] Make accuracy idependant of output 
size (fix #8226)
URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-337584492
 
 
   Thanks @benqua, I think a better way is to change `sumMetric` to `Double`. 
The fix here changes the definition of accuracy.




[GitHub] mseeger commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
mseeger commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337660985
 
 
   Independent of this PR, I'd like to make one more change.
   I am not happy with many of the backward expressions: unlike the 
forward ones, they compute arithmetic expressions in DType rather than in float.
   
   This can be fixed by introducing one more function, square, which computes a 
* a in either float or double.
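The precision concern behind this thread is easy to demonstrate without mshadow. A small standard-library Python sketch (the `to_float32` helper is illustrative) shows how routing a double-precision value through single precision silently discards information, which is exactly what the unit tests mentioned earlier are meant to catch:

```python
import struct


def to_float32(x):
    """Round-trip a Python float (IEEE-754 double) through single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]


# 1.0 + 1e-9 is representable in double precision but rounds to exactly
# 1.0 in single precision (float32 epsilon is ~1.19e-7).
a = 1.0 + 1e-9
print(a * a == to_float32(a) * to_float32(a))  # False: the 1e-9 is lost
```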




[GitHub] piiswrong commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
piiswrong commented on issue #8337: mx.autograd.grad works or fails depending 
on use of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337666156
 
 
   a's first dimension has length 1. Slicing it with 0:4 doesn't really make 
sense. It should raise an error.




[GitHub] piiswrong commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
piiswrong commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337674663
 
 
   It's not specialization. It's overloading.




[GitHub] piiswrong commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
piiswrong commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337674663
 
 
   It's not specialization. It's overloading. The compiler prefers an exact 
match over templates.




[GitHub] mseeger commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-18 Thread GitBox
mseeger commented on issue #8338: master branch cannot build on centos 7 with 
cuda-8.0
URL: 
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337678983
 
 
   The line in question in tensor_gpu-inl.cuh is:
   
   Saver::Save(dst.REval(y, x), exp.Eval(y, x));
   
   Is this the complete error you are getting? Why can't I see what DstPlan or 
Plan are? Or for what arguments dst or exp this is called when it fails?
   




[GitHub] mseeger commented on issue #8323: clean up math operators

2017-10-18 Thread GitBox
mseeger commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337679717
 
 
   Well, if this works, it is fine. Just make sure that if the arg is double, 
then the computation is done in double. But I think my new unit tests would 
likely fail otherwise.
   
   I'd like to give this another go afterwards (the square function), but I'll 
wait until this one is in.




[GitHub] piiswrong commented on issue #8334: Bugfix: Python 3 compatiblity during optimizer serialization.

2017-10-18 Thread GitBox
piiswrong commented on issue #8334: Bugfix: Python 3 compatiblity during 
optimizer serialization.
URL: https://github.com/apache/incubator-mxnet/pull/8334#issuecomment-337664167
 
 
   You should use py_str from base for this 
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/base.py#L43
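A `py_str`-style helper is essentially a version-gated decode; a simplified sketch of the pattern (an illustration of the idea, not a verbatim copy of `base.py`):

```python
import sys

if sys.version_info[0] >= 3:
    def py_str(x):
        """Convert bytes returned from the C API into a Python 3 str."""
        return x.decode('utf-8')
else:
    def py_str(x):
        """On Python 2, C strings already arrive as str."""
        return x

print(py_str(b'hello' if sys.version_info[0] >= 3 else 'hello'))  # hello
```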




[GitHub] eric-haibin-lin closed pull request #8307: [sparse] Remove usage of arange in FillDnsZerosRspImpl

2017-10-18 Thread GitBox
eric-haibin-lin closed pull request #8307: [sparse] Remove usage of arange in 
FillDnsZerosRspImpl
URL: https://github.com/apache/incubator-mxnet/pull/8307
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/init_op.h b/src/operator/tensor/init_op.h
index 97bda906c6..dfbdbbc93a 100644
--- a/src/operator/tensor/init_op.h
+++ b/src/operator/tensor/init_op.h
@@ -179,6 +179,13 @@ void FillCompute(const nnvm::NodeAttrs& attrs,
   });
 }
 
+struct PopulateFullIdxRspKernel {
+  template
+  MSHADOW_XINLINE static void Map(int i, IType* out) {
+KERNEL_ASSIGN(out[i], kWriteTo, i);
+  }
+};
+
 // Fill in the indices and values of a RowSparse NDArray to represent a zeros 
NDArray,
 // instead of the usual compact representation.
 template
@@ -192,21 +199,14 @@ inline void FillDnsZerosRspImpl(mshadow::Stream *s, 
NDArray *dst) {
 MSHADOW_IDX_TYPE_SWITCH(dst->aux_type(kIdx), IType, {
   auto num_rows = dst->shape()[0];
   dst->CheckAndAlloc({Shape1(num_rows)});
-  auto idx = dst->aux_data(kIdx).FlatTo1D(s);
+  auto idx = dst->aux_data(kIdx);
   auto val = dst->data();
   Kernel::Launch(s, val.Size(), val.dptr());
-  ASSIGN_DISPATCH(idx, kWriteTo, range(0, num_rows, 1, 1));
+  Kernel::Launch(s, num_rows, 
idx.dptr());
 });
   });
 }
 
-struct PopulateFullIdxRspKernel {
-  template
-  MSHADOW_XINLINE static void Map(int i, IType* out) {
-KERNEL_ASSIGN(out[i], kWriteTo, i);
-  }
-};
-
 // Fill full indices NDArray with zeros by updating the aux shape.
 template
 void PopulateFullIdxRspImpl(mshadow::Stream *s, NDArray *dst) {
diff --git a/tests/python/unittest/test_optimizer.py 
b/tests/python/unittest/test_optimizer.py
index 8666b9e714..1a26434015 100644
--- a/tests/python/unittest/test_optimizer.py
+++ b/tests/python/unittest/test_optimizer.py
@@ -232,6 +232,10 @@ def test_sgd():
 if dtype != np.float16:
 compare_optimizer(opt1(**kwarg), 
opt2(**kwarg), shape[:2],
   dtype, w_stype='csr', 
g_stype='csr')
+# test optimizer with a big shape
+big_shape = (54686454, 1)
+kwarg = {'momentum': 0.9, 'wd': 0.05}
+compare_optimizer(opt1(**kwarg), opt2(**kwarg), big_shape, np.float32)
 
 class PySparseSGD(mx.optimizer.Optimizer):
 """python reference implemenation of sgd"""


 




[GitHub] piiswrong commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-18 Thread GitBox
piiswrong commented on issue #8338: master branch cannot build on centos 7 with 
cuda-8.0
URL: 
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337669338
 
 
   @mseeger 




[GitHub] szha commented on issue #8330: ndarray issue

2017-10-18 Thread GitBox
szha commented on issue #8330: ndarray issue
URL: 
https://github.com/apache/incubator-mxnet/issues/8330#issuecomment-337669799
 
 
   Here's the path for CPU broadcast_greater.
   
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/ndarray/ndarray.py#L293-L295
   
   
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/ndarray/ndarray.py#L2637-L2697
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/elemwise_binary_broadcast_op_logic.cc#L67-L83
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/mshadow_op.h#L613-L618




[GitHub] aaronmarkham opened a new issue #8339: data iterators tutorial errors

2017-10-18 Thread GitBox
aaronmarkham opened a new issue #8339: data iterators tutorial errors
URL: https://github.com/apache/incubator-mxnet/issues/8339
 
 
   ## Description
   Testing [this 
tutorial](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/basic/data.md)
 on Python 3 shows an error.
   
   ## Environment info (Required)
   Python 3 env (reproduced on both Mac and DL AMI)
   
   ```
   What to do:
   1. Execute the tutorial.
   
   ```
   
   ## Error Message:
   
   ```
   ---
   ValueErrorTraceback (most recent call last)
in ()
 9 
10 mod = mx.mod.Module(symbol=net)
   ---> 11 mod.fit(data_iter, num_epoch=5)
   
   
~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py 
in fit(self, train_data, eval_data, eval_metric, epoch_end_callback, 
batch_end_callback, kvstore, optimizer, optimizer_params, eval_end_callback, 
eval_batch_end_callback, initializer, arg_params, aux_params, allow_missing, 
force_rebind, force_init, begin_epoch, num_epoch, validation_metric, monitor)
   485 if monitor is not None:
   486 monitor.tic()
   --> 487 self.forward_backward(data_batch)
   488 self.update()
   489 try:
   
   
~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py 
in forward_backward(self, data_batch)
   189 def forward_backward(self, data_batch):
   190 """A convenient function that calls both ``forward`` and 
``backward``."""
   --> 191 self.forward(data_batch, is_train=True)
   192 self.backward()
   193 
   
   ~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/module.py in 
forward(self, data_batch, is_train)
   592 new_lshape = None
   593 
   --> 594 self.reshape(new_dshape, new_lshape)
   595 
   596 self._exec_group.forward(data_batch, is_train)
   
   ~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/module.py in 
reshape(self, data_shapes, label_shapes)
   455 assert self.binded
   456 self._data_shapes, self._label_shapes = _parse_data_desc(
   --> 457 self.data_names, self.label_names, data_shapes, 
label_shapes)
   458 
   459 self._exec_group.reshape(self._data_shapes, 
self._label_shapes)
   
   
~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py 
in _parse_data_desc(data_names, label_names, data_shapes, label_shapes)
69 """parse data_attrs into DataDesc format and check that names 
match"""
70 data_shapes = [x if isinstance(x, DataDesc) else DataDesc(*x) 
for x in data_shapes]
   ---> 71 _check_names_match(data_names, data_shapes, 'data', True)
72 if label_shapes is not None:
73 label_shapes = [x if isinstance(x, DataDesc) else 
DataDesc(*x) for x in label_shapes]
   
   
~/conda/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/module/base_module.py 
in _check_names_match(data_names, data_shapes, name, throw)
61 name, name, str(data_shapes), str(data_names))
62 if throw:
   ---> 63 raise ValueError(msg)
64 else:
65 warnings.warn(msg)
   
   ValueError: Data provided by data_shapes don't match names specified by 
data_names ([] vs. ['data'])
   ```
   




[GitHub] mseeger commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-18 Thread GitBox
mseeger commented on issue #8338: master branch cannot build on centos 7 with 
cuda-8.0
URL: 
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337676796
 
 
   The warnings relate to smooth_l1_loss in mshadow_op.h. What I did there is 
replace constants like 0.5f with DType(0.5f), to make it look similar to the 
code elsewhere.
   
   This can be reverted.
   
   I am not enough of an expert on CUDA or mshadow to understand what problems 
this is causing, and I also don't have a clue about "centos 7 with cuda-8.0". 
After all, lots of GPU unit tests work without any issues.
   
   But the first attempt would be to replace smooth_l1_loss and 
smooth_l1_gradient in mshadow_op.h with the old code, and see whether that helps.
   




[GitHub] kpot commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-18 Thread GitBox
kpot commented on issue #8337: mx.autograd.grad works or fails depending on use 
of slices
URL: 
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-337678517
 
 
   I'm afraid you missed the point.
   The failing code works fine when run with an ordinary `.backward()` call:
   
   ```
   import mxnet as mx
   from mxnet import nd, autograd
   
   ctx = mx.cpu()
   
   a = mx.nd.array([1, 2, 3, 4], ctx=ctx)
   a.attach_grad()
   
   with autograd.record():
   b = nd.sum(2 * (a[0:4] ** 2))
   
   b.backward()
   print(a.grad)
   ```
   
   Why does it fail when I retain the graph by using `autograd.grad` and then 
re-use this graph again directly via `.bind`? And such re-use fails *only* when 
slicing is used. I deliberately used the slice `a[0:4]` because it doesn't 
change the shape of the expression and technically should be equal to just `a`. 
Without slicing, both `autograd.grad` and `.bind` do their jobs perfectly well.




[GitHub] tdomhan closed issue #8321: Custom operators with distributed kvstore

2017-10-18 Thread GitBox
tdomhan closed issue #8321: Custom operators with distributed kvstore
URL: https://github.com/apache/incubator-mxnet/issues/8321
 
 
   




[GitHub] tdomhan commented on issue #8321: Custom operators with distributed kvstore

2017-10-18 Thread GitBox
tdomhan commented on issue #8321: Custom operators with distributed kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/8321#issuecomment-337566261
 
 
   I can confirm that this fixes the issue. Thanks a lot! While testing, I ran 
into a Python 3 compatibility issue. Here's a fix for that:
   https://github.com/apache/incubator-mxnet/pull/8334 
   
   I'll close the issue now.




[GitHub] ZiyueHuang opened a new issue #8338: master cannot build on my machine

2017-10-18 Thread GitBox
ZiyueHuang opened a new issue #8338: master cannot build on my machine
URL: https://github.com/apache/incubator-mxnet/issues/8338
 
 
   
   ## Description
   The master branch cannot build on my machine after commit 
9d3ce81ac23dae15e538242e02d8a2b2e76f5b1f. Before that commit, it built 
correctly on my machine.
   
   ## Environment info (Required)
   
   ```
   --System Info--
   ('Platform :', 
'Linux-3.10.0-514.10.2.el7.x86_64-x86_64-with-centos-7.3.1611-Core')
   ('system   :', 'Linux')
   ('node :', 'model-gpu00')
   ('release  :', '3.10.0-514.10.2.el7.x86_64')
   ('version  :', '#1 SMP Fri Mar 3 00:04:05 UTC 2017')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'x86_64')
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):40
   On-line CPU(s) list:   0-39
   Thread(s) per core:2
   Core(s) per socket:10
   Socket(s): 2
   NUMA node(s):  2
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 79
   Model name:Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
   Stepping:  1
   CPU MHz:   1264.484
   BogoMIPS:  4417.67
   Virtualization:VT-x
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  25600K
   NUMA node0 CPU(s): 0-9,20-29
   NUMA node1 CPU(s): 10-19,30-39
   
   ```
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): gcc
   
   ```
   gcc --version
   gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
   Copyright (C) 2015 Free Software Foundation, Inc.
   This is free software; see the source for copying conditions.  There is NO
   warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
   ```
   ```
   nvcc --version
   nvcc: NVIDIA (R) Cuda compiler driver
   Copyright (c) 2005-2016 NVIDIA Corporation
   Built on Tue_Jan_10_13:22:03_CST_2017
   Cuda compilation tools, release 8.0, V8.0.61
   ```
   MXNet commit hash:
   
   f9e25b47c0faec07e940fedb718df2f38dd925b4
   
   Build config:
   ```
   make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/home/hanfeng/local/cuda 
USE_OPENCV=0
   ```
   
   ## Error Message:
   
   ```
   src/operator/tensor/./../mshadow_op.h(1073): warning: floating-point value 
does not fit in required integral type
 detected during:
   
   ```
   ```
   /home/hanfeng/zyh/mxnet/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh(75): 
error: expression preceding parentheses of apparent call must have 
(pointer-to-) function type
   
   /home/hanfeng/zyh/mxnet/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh(75): 
error: expression preceding parentheses of apparent call must have 
(pointer-to-) function type
   
   2 errors detected in the compilation of 
"/tmp/tmpxft_3079_-24_activation.compute_30.cpp2.i".
   make: *** [build/src/operator/activation_gpu.o] Error 2
   ```




[GitHub] kpot commented on issue #621: Support for other Device Types, OpenCL AMD GPU

2017-10-18 Thread GitBox
kpot commented on issue #621: Support for other Device Types, OpenCL AMD GPU
URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-337656791
 
 
   Hi all,
   
   Guys, can anyone explain why mxnet still doesn't support OpenCL out of the 
box, even though it is based on nnvm now, and through it on tvm, and should be 
able to perform all necessary computations using OpenCL devices? I checked on 
nnvm recently and it looks fully up to the task.
   But even in the upcoming mxnet 0.12, the context `mxnet.gpu()` still means 
only "CUDA" and has no association with `tvm.gpu()` or `tvm.cl()`. Why?
   
   Perhaps more than 30% of consumer GPUs around are AMD/Intel-made devices 
supporting OpenCL >= 1.2. Very often it's great, inexpensive, and less 
restricted hardware, and making it available for training would greatly benefit 
the ML community.




[GitHub] ptrendx commented on issue #8336: Building with libjpeg-turbo error

2017-10-18 Thread GitBox
ptrendx commented on issue #8336: Building with libjpeg-turbo error
URL: 
https://github.com/apache/incubator-mxnet/issues/8336#issuecomment-337658368
 
 
   Yes, unfortunately the libjpeg-turbo version available in the Ubuntu package 
compiles turbojpeg only as a static library (turbojpeg.a) instead of a dynamic 
library (turbojpeg.so). You need to compile libjpeg-turbo from source. For 
system-wide installation you can use this:
   ```
   apt-get install autoconf automake libtool nasm
   JPEG_TURBO_VERSION=1.5.2 && \
   wget -q -O - 
https://github.com/libjpeg-turbo/libjpeg-turbo/archive/${JPEG_TURBO_VERSION}.tar.gz
 | tar -xzf - && \
   cd libjpeg-turbo-${JPEG_TURBO_VERSION} && \
   autoreconf -fiv && \
   ./configure --enable-shared --prefix=/usr 2>&1 >/dev/null && \
   make -j"$(nproc)" install 2>&1 >/dev/null && \
   rm -rf libjpeg-turbo-${JPEG_TURBO_VERSION}
   ```
   Then you need to set `USE_LIBJPEG_TURBO_PATH` in `config.mk` to `/usr`





[incubator-mxnet] branch v0.12.0 updated: Fix unused type warning (#8316)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v0.12.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v0.12.0 by this push:
 new 4f2af2d  Fix unused type warning (#8316)
4f2af2d is described below

commit 4f2af2d2e5216ab3a1faadcc117709b6836029dc
Author: Chris Olivier 
AuthorDate: Wed Oct 18 08:15:09 2017 -0700

Fix unused type warning (#8316)
---
 src/operator/tensor/init_op.h | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/src/operator/tensor/init_op.h b/src/operator/tensor/init_op.h
index e08a682..97bda90 100644
--- a/src/operator/tensor/init_op.h
+++ b/src/operator/tensor/init_op.h
@@ -325,11 +325,9 @@ inline bool RangeShape(const nnvm::NodeAttrs& attrs,
   << "Invalid range (start, stop, step)= "
   << "(" << param.start << "," << param.stop.value() << "," << param.step 
<< ")";
   }
-  MSHADOW_TYPE_SWITCH(param.dtype, DType, {
-double out_size = std::ceil((param.stop.value() - param.start) / 
param.step)
-  * param.repeat;
-SHAPE_ASSIGN_CHECK(*out_attrs, 0, 
TShape({static_cast(out_size)}));
-  });
+  const double out_size = std::ceil((param.stop.value() - param.start) / 
param.step)
+  * param.repeat;
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, 
TShape({static_cast(out_size)}));
   return true;
 }
 

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[incubator-mxnet] branch v0.12.0 updated (d84da0a -> cab51bf)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a change to branch v0.12.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d84da0a  fixed broken links. https was pointing to http for mxnet.io 
(#8300)
 new 6ba1cde  Update rnn.md (#8320)
 new 1135f52  fluent methods for missed ops (#8329)
 new cab51bf  update ps lite (#8327)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/api/python/gluon/rnn.md  |  2 +-
 docs/api/python/ndarray/ndarray.md| 37 +-
 docs/api/python/symbol/symbol.md  | 38 --
 ps-lite   |  2 +-
 python/mxnet/ndarray/ndarray.py   | 72 +++
 python/mxnet/symbol/symbol.py | 72 +++
 tests/python/unittest/test_ndarray.py | 10 +++--
 tests/python/unittest/test_symbol.py  |  9 +++--
 8 files changed, 230 insertions(+), 12 deletions(-)



[incubator-mxnet] 03/03: update ps lite (#8327)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v0.12.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit cab51bf4a7c392a5e2bdf2a6b6d755a50cbb8dee
Author: Eric Junyuan Xie 
AuthorDate: Wed Oct 18 01:49:04 2017 -0700

update ps lite (#8327)
---
 ps-lite | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ps-lite b/ps-lite
index acdb698..bdd4c67 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit acdb698fa3bb80929ef83bb37c705f025e119b82
+Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83



[GitHub] yimyom commented on issue #5326: Train-accuracy=0.000000

2017-10-18 Thread GitBox
yimyom commented on issue #5326: Train-accuracy=0.000000
URL: https://github.com/apache/incubator-mxnet/issues/5326#issuecomment-337615967
 
 
   Any hope for a solution to this problem?
   Even using other datasets/learning rates/etc... I keep getting exactly train-accuracy = 0.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] 02/03: fluent methods for missed ops (#8329)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v0.12.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 1135f523f6e0bd9955703bf85ca8ea866e2e8f5f
Author: Sheng Zha 
AuthorDate: Tue Oct 17 22:57:20 2017 -0700

fluent methods for missed ops (#8329)
---
 docs/api/python/ndarray/ndarray.md| 37 +-
 docs/api/python/symbol/symbol.md  | 38 --
 python/mxnet/ndarray/ndarray.py   | 72 +++
 python/mxnet/symbol/symbol.py | 72 +++
 tests/python/unittest/test_ndarray.py | 10 +++--
 tests/python/unittest/test_symbol.py  |  9 +++--
 6 files changed, 228 insertions(+), 10 deletions(-)

diff --git a/docs/api/python/ndarray/ndarray.md b/docs/api/python/ndarray/ndarray.md
index 615b9dc..09564c2 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -125,6 +125,7 @@ The `ndarray` package provides several classes:
 
 NDArray.T
 NDArray.reshape
+NDArray.reshape_like
 NDArray.flatten
 NDArray.expand_dims
 NDArray.split
@@ -194,6 +195,7 @@ The `ndarray` package provides several classes:
 NDArray.topk
 NDArray.argmax
 NDArray.argmin
+NDArray.argmax_channel
 ```
 
 ### Arithmetic operations
@@ -266,7 +268,22 @@ The `ndarray` package provides several classes:
 
 NDArray.sqrt
 NDArray.rsqrt
+NDArray.cbrt
+NDArray.rcbrt
 NDArray.square
+NDArray.reciprocal
+```
+
+## Basic neural network functions
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+NDArray.relu
+NDArray.sigmoid
+NDArray.softmax
+NDArray.log_softmax
 ```
 
 ### In-place arithmetic operations
@@ -358,6 +375,7 @@ The `ndarray` package provides several classes:
 
 cast
 reshape
+reshape_like
 flatten
 expand_dims
 ```
@@ -394,6 +412,7 @@ The `ndarray` package provides several classes:
 
 concat
 split
+stack
 ```
 
 ### Indexing routines
@@ -514,11 +533,13 @@ The `ndarray` package provides several classes:
 power
 sqrt
 rsqrt
+cbrt
+rcbrt
 square
 reciprocal
 ```
 
-### Logic functions
+### Comparison
 
 ```eval_rst
 .. autosummary::
@@ -559,6 +580,18 @@ The `ndarray` package provides several classes:
 argsort
 argmax
 argmin
+argmax_channel
+```
+
+### Sequence operation
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+SequenceLast
+SequenceMask
+SequenceReverse
 ```
 
 ### Miscellaneous
@@ -592,6 +625,8 @@ The `ndarray` package provides several classes:
 SoftmaxOutput
 softmax
 log_softmax
+relu
+sigmoid
 ```
 
 ### More
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index 7570e18..e93976d 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -143,9 +143,23 @@ Composite multiple symbols into a new one by an operator.
 
 Symbol.sqrt
 Symbol.rsqrt
+Symbol.cbrt
+Symbol.rcbrt
 Symbol.square
 ```
 
+## Basic neural network functions
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+Symbol.relu
+Symbol.sigmoid
+Symbol.softmax
+Symbol.log_softmax
+```
+
  Comparison operators
 
 ```eval_rst
@@ -178,6 +192,7 @@ Composite multiple symbols into a new one by an operator.
 
 Symbol.astype
 Symbol.reshape
+Symbol.reshape_like
 Symbol.flatten
 Symbol.expand_dims
 ```
@@ -246,6 +261,7 @@ Composite multiple symbols into a new one by an operator.
 Symbol.topk
 Symbol.argmax
 Symbol.argmin
+Symbol.argmax_channel
 ```
 
 ### Query information
@@ -355,6 +371,7 @@ Composite multiple symbols into a new one by an operator.
 
 cast
 reshape
+reshape_like
 flatten
 expand_dims
 ```
@@ -391,6 +408,7 @@ Composite multiple symbols into a new one by an operator.
 
 concat
 split
+stack
 ```
 
 ### Indexing routines
@@ -424,7 +442,6 @@ Composite multiple symbols into a new one by an operator.
 broadcast_div
 broadcast_mod
 negative
-reciprocal
 dot
 batch_dot
 add_n
@@ -492,7 +509,6 @@ Composite multiple symbols into a new one by an operator.
 trunc
 ```
 
-
 ### Exponents and logarithms
 
 ```eval_rst
@@ -519,9 +535,10 @@ Composite multiple symbols into a new one by an operator.
 cbrt
 rcbrt
 square
+reciprocal
 ```
 
-### Logic functions
+### Comparison
 
 ```eval_rst
 .. autosummary::
@@ -534,6 +551,7 @@ Composite multiple symbols into a new one by an operator.
 broadcast_lesser
 broadcast_lesser_equal
 ```
+
 ### Random sampling
 
 ```eval_rst
@@ -561,6 +579,18 @@ Composite multiple symbols into a new one by an operator.
 argsort
 argmax
 argmin
+argmax_channel
+```
+
+### Sequence operation
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+SequenceLast
+SequenceMask

[incubator-mxnet] 01/03: Update rnn.md (#8320)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch v0.12.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 6ba1cde37bd7d17091737d5c4de79875896e1d0a
Author: Sheng Zha 
AuthorDate: Tue Oct 17 20:52:02 2017 -0700

Update rnn.md (#8320)
---
 docs/api/python/gluon/rnn.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/python/gluon/rnn.md b/docs/api/python/gluon/rnn.md
index 073314d..7a40c45 100644
--- a/docs/api/python/gluon/rnn.md
+++ b/docs/api/python/gluon/rnn.md
@@ -21,7 +21,7 @@ with model.name_scope():
 model.add(mx.gluon.rnn.LSTM(20))
 model.add(mx.gluon.nn.Dense(5, flatten=False))
 model.initialize()
-model(mx.nd.ones((2,3,5)))
+model(mx.nd.ones((2,3)))
 ```
 
 ```eval_rst



[incubator-mxnet] branch master updated: Fix unused type warning (#8316)

2017-10-18 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f9e25b4  Fix unused type warning (#8316)
f9e25b4 is described below

commit f9e25b47c0faec07e940fedb718df2f38dd925b4
Author: Chris Olivier 
AuthorDate: Wed Oct 18 08:15:09 2017 -0700

Fix unused type warning (#8316)
---
 src/operator/tensor/init_op.h | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/src/operator/tensor/init_op.h b/src/operator/tensor/init_op.h
index e08a682..97bda90 100644
--- a/src/operator/tensor/init_op.h
+++ b/src/operator/tensor/init_op.h
@@ -325,11 +325,9 @@ inline bool RangeShape(const nnvm::NodeAttrs& attrs,
   << "Invalid range (start, stop, step)= "
   << "(" << param.start << "," << param.stop.value() << "," << param.step << ")";
   }
-  MSHADOW_TYPE_SWITCH(param.dtype, DType, {
-    double out_size = std::ceil((param.stop.value() - param.start) / param.step)
-        * param.repeat;
-    SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape({static_cast<nnvm::dim_t>(out_size)}));
-  });
+  const double out_size = std::ceil((param.stop.value() - param.start) / param.step)
+      * param.repeat;
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape({static_cast<nnvm::dim_t>(out_size)}));
   return true;
 }
 



[GitHub] cjolivier01 closed pull request #8316: Fix unused type warning

2017-10-18 Thread GitBox
cjolivier01 closed pull request #8316: Fix unused type warning
URL: https://github.com/apache/incubator-mxnet/pull/8316
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/init_op.h b/src/operator/tensor/init_op.h
index e08a682d94..97bda906c6 100644
--- a/src/operator/tensor/init_op.h
+++ b/src/operator/tensor/init_op.h
@@ -325,11 +325,9 @@ inline bool RangeShape(const nnvm::NodeAttrs& attrs,
   << "Invalid range (start, stop, step)= "
   << "(" << param.start << "," << param.stop.value() << "," << param.step << ")";
   }
-  MSHADOW_TYPE_SWITCH(param.dtype, DType, {
-    double out_size = std::ceil((param.stop.value() - param.start) / param.step)
-        * param.repeat;
-    SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape({static_cast<nnvm::dim_t>(out_size)}));
-  });
+  const double out_size = std::ceil((param.stop.value() - param.start) / param.step)
+      * param.repeat;
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape({static_cast<nnvm::dim_t>(out_size)}));
   return true;
 }
 


 




[GitHub] vinig commented on issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray unit test fails

2017-10-18 Thread GitBox
vinig commented on issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray unit test fails
URL: https://github.com/apache/incubator-mxnet/issues/8231#issuecomment-337706524
 
 
   #8292 @larroy thanks for looking into this



