[GitHub] cloudhan commented on issue #3724: asnumpy() of NDArray @cpu halted

2017-08-17 Thread git
cloudhan commented on issue #3724: asnumpy() of NDArray @cpu halted URL: https://github.com/apache/incubator-mxnet/issues/3724#issuecomment-322980863 @maxenceliu nah, I lost the script and forgot how to reproduce... This

[GitHub] jinfagang opened a new issue #7509: SSD example error

2017-08-17 Thread git
jinfagang opened a new issue #7509: SSD example error URL: https://github.com/apache/incubator-mxnet/issues/7509 Simply running the SSD example got this error: ``` [01:09:24] src/nnvm/legacy_json_util.cc:185: Warning: loading symbol saved by MXNet version 1001 with lower version of MXNet

[GitHub] CNevd commented on a change in pull request #7082: Sparse Tensor: request for reviews

2017-08-17 Thread git
CNevd commented on a change in pull request #7082: Sparse Tensor: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133667882 ## File path: python/mxnet/model.py ## @@ -113,25 +127,36 @@ def _update_params_on_kvstore(param_arrays,

[GitHub] jinfagang commented on issue #7509: SSD example error

2017-08-17 Thread git
jinfagang commented on issue #7509: SSD example error URL: https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323004047 Well, I know it's a version issue, but I just installed from pip, which gives me 0.10.0; the newest version is 0.11, but why not update pip? I really cannot

[GitHub] ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon

2017-08-17 Thread git
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon URL: https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323191837 Yes. I tried ```.save_params()``` function, it only saves the params. I cannot find the symbol file :(. I checked all

[GitHub] DickJC123 commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
DickJC123 commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133845489 ## File path: src/common/cuda_utils.h ## @@ -304,11 +304,32 @@

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221350 It is actually compressed. The float data is just a holder, in which 16 floats will be compressed into one float value
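A rough numpy sketch of the packing described above, not the PR's actual kernel: 16 gradient values are reduced to 2-bit codes and packed into a single 32-bit word. The threshold value and the code assignment are assumptions made only for illustration.
```
import numpy as np

# Assumed code assignment (illustration only, not the PR's kernel):
#   0 -> |g| <= threshold, 1 -> g > threshold, 2 -> g < -threshold
def pack_2bit(grad16, threshold=1.0):
    """Pack 16 floats into one 32-bit word using 2-bit codes."""
    assert grad16.size == 16
    packed = 0
    for i, g in enumerate(grad16):
        code = 1 if g > threshold else (2 if g < -threshold else 0)
        packed |= code << (2 * i)          # 2 bits per original value
    return np.uint32(packed)

def unpack_2bit(packed, threshold=1.0):
    """Recover 16 coarse values: +threshold, -threshold, or 0."""
    codes = [(int(packed) >> (2 * i)) & 0b11 for i in range(16)]
    return np.array([threshold if c == 1 else (-threshold if c == 2 else 0.0)
                     for c in codes], dtype=np.float32)

g = np.random.randn(16).astype(np.float32)
print(unpack_2bit(pack_2bit(g)))           # coarse reconstruction of g
```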

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224813 Example: Compress: `>>> arr = np.random.randint(-10,10,(5, 16)) >>> array = mx.nd.array(arr) >>> array [[ 5. -1.
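The example above is cut off by the archive; below is only the setup that is visible in the snippet, reconstructed as a runnable fragment. The quantize_2bit call and the printed output from the original comment are not reproduced.
```
import numpy as np
import mxnet as mx

# The setup that is visible in the truncated comment above.
arr = np.random.randint(-10, 10, (5, 16))
array = mx.nd.array(arr)
print(array)
# The comment then shows the result of the PR's proposed quantize_2bit
# operator applied to `array`; that output is cut off in the archive.
```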

[GitHub] szha commented on issue #7506: build error: expected ‘}’ at end of input

2017-08-17 Thread git
szha commented on issue #7506: build error: expected ‘}’ at end of input URL: https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323190662 Make sure that the submodules are updated by using `git submodule update --init --recursive`

[GitHub] szha commented on issue #7501: [Feature Request] save symbol as well in gluon

2017-08-17 Thread git
szha commented on issue #7501: [Feature Request] save symbol as well in gluon URL: https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190864 you can call the `.save_params()` function on it. This is an

[GitHub] ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon

2017-08-17 Thread git
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon URL: https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190457 how to serialize hybridblock? This is an automated message

[GitHub] piiswrong commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
piiswrong commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133838448 ## File path: src/common/cuda_utils.h ## @@ -304,11 +304,32 @@

[GitHub] DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216307 Besides responding to the comments, I've removed the assign_linalg_gemm() function and

[GitHub] piiswrong commented on issue #7447: Tensorcore fullyconnected support2

2017-08-17 Thread git
piiswrong commented on issue #7447: Tensorcore fullyconnected support2 URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323186684 should this be closed? This is an automated message from the Apache Git

[GitHub] piiswrong commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216558 linalg_gemm with req (instead of alpha/beta) is good. But we don't need the Transposed mechanism

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221901 We may want to consider adding a data type like int2 This is an automated

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221804 For now, this method can only support compressing float data. This is an

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224581 How are gradients from different gpus aggregated? This is an automated

[GitHub] szha commented on issue #7501: [Feature Request] save symbol as well in gluon

2017-08-17 Thread git
szha commented on issue #7501: [Feature Request] save symbol as well in gluon URL: https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323200126 @ShownX ah sorry for missing that. To get the symbol out of a block, just feed the block with a symbol of the intended input
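A hedged Python sketch of the workflow described here: feed the HybridBlock a symbol of the intended input to recover the graph, and save the parameters separately with `.save_params()`. The toy network, shapes, and file names are made up for illustration.
```
import mxnet as mx
from mxnet import gluon

# A toy HybridBlock; any hybridizable network works the same way.
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))
net.initialize()
net.hybridize()

# Run one batch so parameters get their shapes, then save the weights.
net(mx.nd.zeros((1, 100)))
net.save_params('net.params')      # weights only, as noted above

# Feed a symbol of the intended input to recover the symbolic graph.
sym = net(mx.sym.var('data'))
sym.save('net-symbol.json')        # serialized symbol
```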

[GitHub] tqchen commented on issue #621: Support for other Device Types, OpenCL AMD GPU

2017-08-17 Thread git
tqchen commented on issue #621: Support for other Device Types, OpenCL AMD GPU URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-323223409 Would like to update on this: this can now be done via https://github.com/dmlc/tvm

[GitHub] DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216772 For the example I showed, how about: linalg_gemm(data, wmat, out, false, true, s);

[GitHub] aksnzhy opened a new pull request #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy opened a new pull request #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512 This is the implementation of two-bit compression. This is an automated message from the

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323220828 Is the output actually compressed? Looks like it's still float? This is an

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-32322 How is this going to be used by kvstore and ps-lite? How does it recognize a compressed array?

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223754 The compressed array is also an NDArray: the first two elements are the two thresholds, and the other elements are the compressed data. Every 16 values will
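The snippet is truncated; a small numpy sketch of the layout as described in this comment: two threshold slots followed by one packed slot per 16 original values. The helper names and the size formula are assumptions based on this comment, not the PR's code.
```
import numpy as np

def compressed_length(n_elements):
    """2 threshold slots plus ceil(n/16) packed slots (assumed layout)."""
    return 2 + (n_elements + 15) // 16

def split_compressed(comp):
    """Split a compressed array into (neg_threshold, pos_threshold, packed data)."""
    return float(comp[0]), float(comp[1]), comp[2:]

# An original gradient of 5*16 = 80 floats would compress to 2 + 5 = 7 slots.
print(compressed_length(80))   # -> 7
```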

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223243 For multi-GPU, we can directly invoke quantize_2bit() before copying data to other gpu, and invoke dequantize_2bit() when receiving the data

[GitHub] piiswrong commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323254967 what happened to the changes concerning dev_id?

[GitHub] szha commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
szha commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883907 ## File path: tests/python/unittest/test_loss.py ## @@ -165,6 +165,36 @@ def

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883738 ## File path: src/operator/contrib/ctc_loss-inl.h ## @@ -221,12 +315,139 @@ class

[GitHub] adamcrussell commented on issue #7499: how to correctly have multiple inputs with Perl API?

2017-08-17 Thread git
adamcrussell commented on issue #7499: how to correctly have multiple inputs with Perl API? URL: https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323244015 @sergeykolychev a very minor thing I noted was that on line 136 of

[incubator-mxnet] branch master updated (462dee7 -> 56eae58)

2017-08-17 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 462dee7 Fix description of argument parser (#7507) add 56eae58 Fixed Makefile so a null CUDA_ARCH is

[GitHub] piiswrong closed pull request #7515: Fixed Makefile so a null CUDA_ARCH is treated like an unset one.

2017-08-17 Thread git
piiswrong closed pull request #7515: Fixed Makefile so a null CUDA_ARCH is treated like an unset one. URL: https://github.com/apache/incubator-mxnet/pull/7515 This is an automated message from the Apache Git Service. To

[GitHub] rahul003 commented on issue #7321: fixes broken compilation by including tensor_blob

2017-08-17 Thread git
rahul003 commented on issue #7321: fixes broken compilation by including tensor_blob URL: https://github.com/apache/incubator-mxnet/pull/7321#issuecomment-323225231 This change has now been introduced through https://github.com/apache/incubator-mxnet/pull/7147

[GitHub] kevinthesun opened a new pull request #12: Modify pip installation

2017-08-17 Thread git
kevinthesun opened a new pull request #12: Modify pip installation URL: https://github.com/apache/incubator-mxnet-site/pull/12 This is an automated message from the Apache Git Service. To respond to the message, please log

[GitHub] sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API?

2017-08-17 Thread git
sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API? URL: https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323251187 @adamcrussell Thank you! Feel free to create a pull :-) We need more perl devs for mxnet!

[GitHub] pjpan commented on issue #2754: import mxnet Reason: image not found

2017-08-17 Thread git
pjpan commented on issue #2754: import mxnet Reason: image not found URL: https://github.com/apache/incubator-mxnet/issues/2754#issuecomment-323257968 @ForoughA no. This is an automated message from the Apache Git

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884680 ## File path: src/operator/contrib/ctc_loss-inl.h ## @@ -125,43 +130,112 @@

[GitHub] piiswrong commented on issue #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on issue #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#issuecomment-323266679 Do you have a performance comparison between baidu and cudnn?

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226157 The residual is stored in each local device. This is an automated message

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226023 Yes, before compressing we need to add the residual, and we also need to calculate the new residual after compressing. I think it will be better to let the user do this task; this operator is just a very low-level

[GitHub] sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor

2017-08-17 Thread git
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor URL: https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233860 I've switched to cub to have a try. This is an

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323258926 You can calculate compression and residual in one operator to make it faster

[GitHub] piiswrong commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
piiswrong commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225535 quantize also needs to output the residual, right? This is an automated

[GitHub] dhaners opened a new issue #7513: How to view results from recommender examples?

2017-08-17 Thread git
dhaners opened a new issue #7513: How to view results from recommender examples? URL: https://github.com/apache/incubator-mxnet/issues/7513 I am new to MXNet and recommender engines and wanted to test out the matrix factorization demos provided with MXNet. I can run the python notebooks

[GitHub] kevinthesun opened a new pull request #7514: Modify pip install to specific tag

2017-08-17 Thread git
kevinthesun opened a new pull request #7514: Modify pip install to specific tag URL: https://github.com/apache/incubator-mxnet/pull/7514 This is an automated message from the Apache Git Service. To respond to the message,

[GitHub] StatML commented on issue #7506: build error: expected ‘}’ at end of input

2017-08-17 Thread git
StatML commented on issue #7506: build error: expected ‘}’ at end of input URL: https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323226769 Hi, @ptrendx @szha @piiswrong Thanks a lot for your replies! The problem is solved. I just did `make clean` then `make -j`. So

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884497 ## File path: python/mxnet/gluon/loss.py ## @@ -295,3 +297,102 @@ def

[GitHub] sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor

2017-08-17 Thread git
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor URL: https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323259894 OK, I've found the problem. It's this line

[GitHub] eric-haibin-lin commented on a change in pull request #7082: Sparse Tensor: request for reviews

2017-08-17 Thread git
eric-haibin-lin commented on a change in pull request #7082: Sparse Tensor: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133881538 ## File path: python/mxnet/model.py ## @@ -113,25 +127,36 @@ def

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883552 ## File path: tests/python/unittest/test_loss.py ## @@ -165,6 +165,36 @@ def

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884234 ## File path: src/operator/contrib/ctc_loss-inl.h ## @@ -125,43 +130,112 @@

[GitHub] jinfagang commented on issue #7509: SSD example error

2017-08-17 Thread git
jinfagang commented on issue #7509: SSD example error URL: https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323090316 Just followed the SSD example, downloaded right from the link given by that example; except for some path issue, this model seems not compatible with mxnet

[GitHub] sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed.

2017-08-17 Thread git
sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed. URL: https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323093556 @dokechin Thanks! I'll review it on weekend and fix bugs if any.

[GitHub] dokechin opened a new issue #7511: Perl MNIST using record file format. Test not passed.

2017-08-17 Thread git
dokechin opened a new issue #7511: Perl MNIST using record file format. Test not passed. URL: https://github.com/apache/incubator-mxnet/issues/7511 ## Environment info Operating System: Mac OS Package used (Python/R/Scala/Julia): Perl MXNet version: 0.10.0 ##

[GitHub] sandeep-krishnamurthy commented on issue #7509: SSD example error

2017-08-17 Thread git
sandeep-krishnamurthy commented on issue #7509: SSD example error URL: https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323119026 You can install 0.11-rc1 with the pip commands below: # for cpu pip install mxnet==0.11.0.rc1 # for gpu with cuda8 pip install

[GitHub] sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed.

2017-08-17 Thread git
sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed. URL: https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323120176 @dokechin When I change your code to use ImageIter (and make batch size 100 and as well force shuffle => 1)

[GitHub] szha commented on issue #7264: gluon conv rnns

2017-08-17 Thread git
szha commented on issue #7264: gluon conv rnns URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-323128808 @sxjscience @piiswrong this PR is ready for review and merge. This is an automated message from

[GitHub] aksnzhy commented on issue #7512: add two bit compression operator

2017-08-17 Thread git
aksnzhy commented on issue #7512: add two bit compression operator URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225418 Before aggregating the gradient, we first need to decompress them into a new array with the same size as the original gradient array, but the value
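A hedged numpy sketch of the flow discussed in this thread: each device adds its local residual before quantizing, keeps the quantization error as the new residual, and the receiver decompresses into full-size arrays before summing. `quantize_2bit` here is a crude stand-in for the PR's proposed operator, not its implementation.
```
import numpy as np

def quantize_2bit(grad, threshold=1.0):
    """Crude stand-in for the proposed operator: +threshold, -threshold, or 0."""
    q = np.zeros_like(grad)
    q[grad > threshold] = threshold
    q[grad < -threshold] = -threshold
    return q

n_devices, n_params = 4, 80
residuals = [np.zeros(n_params, dtype=np.float32) for _ in range(n_devices)]

received = []
for d in range(n_devices):
    grad = np.random.randn(n_params).astype(np.float32)
    corrected = grad + residuals[d]     # add the residual before compressing
    q = quantize_2bit(corrected)
    residuals[d] = corrected - q        # new residual after compressing
    received.append(q)                  # stands in for the compressed copy sent out

# Receiver side: decompress into full-size arrays (trivial here), then aggregate.
aggregated = np.sum(received, axis=0)
print(aggregated.shape)
```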

[GitHub] rahul003 commented on issue #7421: Resolve more compile warnings

2017-08-17 Thread git
rahul003 commented on issue #7421: Resolve more compile warnings URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323232242 Sorry, didn't get you @tornadomeet. Are you saying we can also update the function declaration? Let me know if there's a better solution,

[GitHub] DickJC123 closed pull request #7447: Tensorcore fullyconnected support2

2017-08-17 Thread git
DickJC123 closed pull request #7447: Tensorcore fullyconnected support2 URL: https://github.com/apache/incubator-mxnet/pull/7447 This is an automated message from the Apache Git Service. To respond to the message, please

[GitHub] DickJC123 commented on issue #7447: Tensorcore fullyconnected support2

2017-08-17 Thread git
DickJC123 commented on issue #7447: Tensorcore fullyconnected support2 URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323238326 This work was superseded by a later PR. This is an automated message from the

[incubator-mxnet] branch master updated: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. (#7505)

2017-08-17 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new ff21e1f Changed FullyConnected to use

[GitHub] StatML closed issue #7506: build error: expected ‘}’ at end of input

2017-08-17 Thread git
StatML closed issue #7506: build error: expected ‘}’ at end of input URL: https://github.com/apache/incubator-mxnet/issues/7506 This is an automated message from the Apache Git Service. To respond to the message, please log

[GitHub] rahul003 commented on issue #7421: Resolve more compile warnings

2017-08-17 Thread git
rahul003 commented on issue #7421: Resolve more compile warnings URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233393 Ah right, didn't notice that it was a different file. Changing that as well.

[GitHub] sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor

2017-08-17 Thread git
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor URL: https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233399 I've confirmed. This is a bug. I'll look into what caused it. It may be related to the way I do batched sort.

[GitHub] tornadomeet commented on issue #7421: Resolve more compile warnings

2017-08-17 Thread git
tornadomeet commented on issue #7421: Resolve more compile warnings URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233263 yes, the function there is the same and is also `int`; we'd better fix that as well.

[GitHub] piiswrong closed pull request #7514: Modify pip install to specific tag

2017-08-17 Thread git
piiswrong closed pull request #7514: Modify pip install to specific tag URL: https://github.com/apache/incubator-mxnet/pull/7514 This is an automated message from the Apache Git Service. To respond to the message, please

[incubator-mxnet] branch master updated: Modify pip install to specific tag (#7514)

2017-08-17 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 6004e52 Modify pip install to specific

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883452 ## File path: src/operator/contrib/ctc_loss-inl.h ## @@ -240,15 +461,22 @@ class

[GitHub] chinakook opened a new issue #7517: Add Depthwise Deconvolution support?

2017-08-17 Thread git
chinakook opened a new issue #7517: Add Depthwise Deconvolution support? URL: https://github.com/apache/incubator-mxnet/issues/7517 @crazy-cat Hi cat. You have made the convolution faster when num_filter==num_group (depthwise convolution). I have a question about that: the

[GitHub] CNevd commented on a change in pull request #7082: Sparse Tensor: request for reviews

2017-08-17 Thread git
CNevd commented on a change in pull request #7082: Sparse Tensor: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133883401 ## File path: python/mxnet/model.py ## @@ -113,25 +127,36 @@ def _update_params_on_kvstore(param_arrays,

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884362 ## File path: python/mxnet/gluon/loss.py ## @@ -295,3 +297,102 @@ def

[GitHub] piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC

2017-08-17 Thread git
piiswrong commented on a change in pull request #7442: contrib ctc interface changes, cudnn7 CTC, and gluon CTC URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884355 ## File path: python/mxnet/gluon/loss.py ## @@ -295,3 +297,102 @@ def

[GitHub] sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API?

2017-08-17 Thread git
sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API? URL: https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323165552 yes, ->fit expects one iterator This is an

[GitHub] mbrookhart closed issue #7316: Is there any plan to refactor the legacy Ops into nnvm?

2017-08-17 Thread git
mbrookhart closed issue #7316: Is there any plan to refactor the legacy Ops into nnvm? URL: https://github.com/apache/incubator-mxnet/issues/7316 This is an automated message from the Apache Git Service. To respond to the

[GitHub] sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API?

2017-08-17 Thread git
sergeykolychev commented on issue #7499: how to correctly have multiple inputs with Perl API? URL: https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323167340 you can create one-hot encoding in multiple ways; the easiest one is to have your iterator return an ndarray for that
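The advice concerns the Perl API, but the idea carries over directly; a hedged Python sketch of an iterator-side helper that returns a one-hot ndarray for a batch of integer labels (the helper name is made up).
```
import numpy as np
import mxnet as mx

def one_hot_batch(labels, num_classes):
    """Turn integer labels into a one-hot NDArray to return from the iterator."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return mx.nd.array(out)

print(one_hot_batch([2, 0, 3], num_classes=4))
```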

[GitHub] piiswrong commented on issue #7506: build error: expected ‘}’ at end of input

2017-08-17 Thread git
piiswrong commented on issue #7506: build error: expected ‘}’ at end of input URL: https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323142098 @asmushetzel @ptrendx Any idea why cusolverStatus_t wouldn't be available?

[GitHub] adamcrussell commented on issue #7499: how to correctly have multiple inputs with Perl API?

2017-08-17 Thread git
adamcrussell commented on issue #7499: how to correctly have multiple inputs with Perl API? URL: https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323149062 @sergeykolychev great, that works as far as getting the module to accept the inputs. I hadn't written my own

[GitHub] sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed.

2017-08-17 Thread git
sergeykolychev commented on issue #7511: Perl MNIST using record file format. Test not passed. URL: https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323149337 @dokechin There seems to be some error at the C++ level; the training.ctl file that you used to create the

[GitHub] piiswrong commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor

2017-08-17 Thread git
piiswrong commented on issue #7510: mx.sym.argsort() cannot sort array with large tensor URL: https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323141732 @sxjscience @reminisce This is an automated

[GitHub] piiswrong commented on issue #7501: [Feature Request] save symbol as well in gluon

2017-08-17 Thread git
piiswrong commented on issue #7501: [Feature Request] save symbol as well in gluon URL: https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323142304 Only hybridblock can be serialized to symbol. Normal blocks can only be pickled

[GitHub] piiswrong commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O.

2017-08-17 Thread git
piiswrong commented on a change in pull request #7505: Changed FullyConnected to use new linalg gemm, plus TensorCore if fp16 I/O. URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133774602 ## File path: src/operator/linalg_impl.h ## @@ -108,6 +112,55
