[GitHub] jiarenyf commented on issue #8315: There is a bug in metric.py

2017-10-19 Thread GitBox
jiarenyf commented on issue #8315: There is a bug in metric.py URL: https://github.com/apache/incubator-mxnet/issues/8315#issuecomment-338111435 By the way, I found that nearly no one has answered issue questions for a long time ... Is this framework being given up ...

[GitHub] jiarenyf commented on issue #8315: There is a bug in metric.py

2017-10-19 Thread GitBox
jiarenyf commented on issue #8315: There is a bug in metric.py URL: https://github.com/apache/incubator-mxnet/issues/8315#issuecomment-33844 ... The `output_names` is None (default), or a list when you instantiate a metric class with an arg called `output_names` ... It cannot be an int ...
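
The contract described in this comment can be sketched in plain Python (a simplified, hypothetical stand-in for illustration; this is not MXNet's actual `metric.py`):

```python
class EvalMetric:
    """Hypothetical, minimal stand-in for an MXNet-style metric class."""

    def __init__(self, name, output_names=None):
        # Per the comment above: output_names is None by default, or a list
        # supplied when the metric is constructed -- never an int.
        if output_names is not None and not isinstance(output_names, list):
            raise TypeError("output_names must be None or a list, got "
                            + type(output_names).__name__)
        self.name = name
        self.output_names = output_names

m1 = EvalMetric("acc")                                   # defaults to None
m2 = EvalMetric("acc", output_names=["softmax_output"])  # explicit list
```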

[GitHub] liuzhi136 commented on issue #8341: Training error always fluctuates and doesn't decrease.

2017-10-19 Thread GitBox
liuzhi136 commented on issue #8341: Training error always fluctuates and doesn't decrease. URL: https://github.com/apache/incubator-mxnet/issues/8341#issuecomment-338108545 @szha Do you have any idea about this?

[GitHub] jiarenyf commented on issue #8347: CTC Example Problem

2017-10-19 Thread GitBox
jiarenyf commented on issue #8347: CTC Example Problem URL: https://github.com/apache/incubator-mxnet/issues/8347#issuecomment-338108475 ?? This is an automated message from the Apache Git Service.

[GitHub] zheng-da commented on issue #8354: How to add NNVM operator with auxiliary states

2017-10-19 Thread GitBox
zheng-da commented on issue #8354: How to add NNVM operator with auxiliary states URL: https://github.com/apache/incubator-mxnet/issues/8354#issuecomment-338100362 You can use nnvm::FMutateInputs to specify auxiliary states. The link below shows an example.

[GitHub] ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-19 Thread GitBox
ZiyueHuang commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0 URL: https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-338100092 @mseeger Yes, if I check out to 9f97dac76e43b2ca0acb09a4ff96d416e9edea60, the one just before your commit. It can

[GitHub] shadowleaves opened a new issue #4045: inferred shape error with FullyConnected layer

2017-10-19 Thread GitBox
shadowleaves opened a new issue #4045: inferred shape error with FullyConnected layer URL: https://github.com/apache/incubator-mxnet/issues/4045 very simple program that starts with two input nodes (batch_size=100, n_inputs=2) and then maps it to a layer of 3 hidden nodes via a 2x3 weight

[GitHub] yuewu001 opened a new issue #8354: How to add NNVM operator with auxiliary states

2017-10-19 Thread GitBox
yuewu001 opened a new issue #8354: How to add NNVM operator with auxiliary states URL: https://github.com/apache/incubator-mxnet/issues/8354 When adding new operators with auxiliary states, how to set the attributes with NNVM_REGISTER_OP ?

[GitHub] chinakook commented on issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20%

2017-10-19 Thread GitBox
chinakook commented on issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20% URL: https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-338080017 But the CPU performance on Windows is also lower. Our customers use Windows, so I cannot give it up.

[GitHub] rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145859932 ## File path: src/kvstore/comm.h ## @@ -79,8 +79,35 @@ class Comm { return pinned_ctx_; } + /**

[GitHub] cjolivier01 commented on a change in pull request #8340: Fill optimizations

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8340: Fill optimizations URL: https://github.com/apache/incubator-mxnet/pull/8340#discussion_r145845639 ## File path: src/operator/tensor/init_op.h ## @@ -164,19 +164,38 @@ inline bool InitStorageType(const

[GitHub] louisfeng commented on a change in pull request #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
louisfeng commented on a change in pull request #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145845602 ## File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h ## @@ -0,0 +1,233 @@

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338061064 Awesome, thank you so much

[GitHub] zhreshold commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
zhreshold commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060914 gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False)

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060702 Ok. So what is the explicit correct loss function for one-hot labels?

[GitHub] louisfeng commented on a change in pull request #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
louisfeng commented on a change in pull request #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145844506 ## File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h ## @@ -0,0 +1,233 @@

[GitHub] zhreshold commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
zhreshold commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060442 @nickeleres Just to mention that what I meant is sparse_label=False; from_logits is used if log_softmax is applied prior to
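
The distinction being discussed (integer class indices vs. one-hot label rows) can be illustrated with a small NumPy sketch; this mimics the semantics of `sparse_label`, not Gluon's implementation:

```python
import numpy as np

def softmax_ce(logits, labels, sparse_label=True):
    # Numerically stable softmax, then cross-entropy per example.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    if sparse_label:
        # labels are integer class indices, shape (batch,)
        return -np.log(p[np.arange(len(labels)), labels])
    # labels are one-hot rows, shape (batch, num_classes)
    return -(labels * np.log(p)).sum(axis=1)

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
sparse = np.array([0, 1])          # integer labels
onehot = np.eye(3)[sparse]         # the same labels, one-hot encoded
```

Both encodings yield identical losses; the loss class only needs to be told which label format to expect.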

[GitHub] szha commented on issue #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
szha commented on issue #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338059265 @ykim362 BTW is the fix in mklml_lnx_2018.0.20170908.tgz? Does it make sense to upgrade the library for mkl2017 use case? Many people are

[GitHub] zhreshold commented on issue #8348: mxnet.gluon.data.vision.ImageRecordDataset key error

2017-10-19 Thread GitBox
zhreshold commented on issue #8348: mxnet.gluon.data.vision.ImageRecordDataset key error URL: https://github.com/apache/incubator-mxnet/issues/8348#issuecomment-338058462 Should be fixed in #8353

[GitHub] ykim362 commented on issue #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
ykim362 commented on issue #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338058204 @szha Sure, I am looking into it.

[GitHub] nickeleres closed issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres closed issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350

[GitHub] szha commented on issue #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
szha commented on issue #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338057682 @ykim362 could you verify if #8196 is fixed?

[GitHub] zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145817839 ## File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h ## @@ -0,0 +1,233 @@

[GitHub] zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145819255 ## File path: src/operator/tensor/elemwise_binary_op_basic.cc ## @@ -23,11 +23,22 @@ */

[GitHub] zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews

2017-10-19 Thread GitBox
zheng-da commented on a change in pull request #7931: MKL-DNN integration: request for reviews URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145817991 ## File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h ## @@ -0,0 +1,233 @@

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338056483 This is my new loss function: `softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss(from_logits=True)`

[GitHub] zhreshold commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
zhreshold commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338055150 You can use gluon.loss.SoftmaxCrossEntropyLoss, where you can specify `from_logits=True` to use one_hot labels.

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338052781 No custom loss function `loss = softmax_cross_entropy(output, label)` It looks like the

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338054520 When I decrease the batch size to 1, the training labels are simply integers **(the 0th entry in the one-hot array for each

[GitHub] cjolivier01 commented on issue #8340: Fill optimizations

2017-10-19 Thread GitBox
cjolivier01 commented on issue #8340: Fill optimizations URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-338053030 Ok, @szha and I spoke offline. I added the new operator _full to be used by the ndarray.full() function. It is tested in test_ndarray.test_outputs()
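
For context, a `full()`-style operator allocates an array and fills it with one value in a single step; NumPy's equivalent sketches the intended behavior (an analogy only, not the new `_full` operator itself):

```python
import numpy as np

# Allocate a 2x3 float32 array and fill every element with 7.5 in one call.
a = np.full((2, 3), 7.5, dtype=np.float32)
```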

[GitHub] zhreshold commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
zhreshold commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338051045 Please post your custom loss function.

[GitHub] zhreshold closed issue #8310: Bug in ./example/

2017-10-19 Thread GitBox
zhreshold closed issue #8310: Bug in ./example/ URL: https://github.com/apache/incubator-mxnet/issues/8310

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338049715 When I reshaped my label to (32, 1), as the error message stated, I got the training to run, but the alignment between the

[GitHub] zhreshold opened a new pull request #8352: fix using default mean pixels

2017-10-19 Thread GitBox
zhreshold opened a new pull request #8352: fix using default mean pixels URL: https://github.com/apache/incubator-mxnet/pull/8352 ## Description ## (Brief description on what this PR is about) ## Checklist ## ### Essentials ### - [x] Passed code style checking (`make lint`)

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832223 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833515 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833486 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831418 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832475 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833204 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833629 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832626 ## File path: src/ndarray/ndarray_function.cc ## @@ -183,5 +184,22 @@ void

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831125 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831333 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832799 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832433 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832653 ## File path: src/ndarray/ndarray_function.cu ## @@ -202,5 +203,22 @@ void

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832885 ## File path: src/operator/contrib/two_bit_quantize-inl.h ## @@ -0,0 +1,340 @@ +/* + * Licensed to the

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832353 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831668 ## File path: src/ndarray/ndarray.cc ## @@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const

[GitHub] kpot commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-19 Thread GitBox
kpot commented on issue #8337: mx.autograd.grad works or fails depending on use of slices URL: https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-338047855 @piiswrong Is there any better way to get the graph/symbol from autograd? Because the method I use seems logical to

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830977 ## File path: src/kvstore/kvstore_local.h ## @@ -135,6 +135,13 @@ class KVStoreLocal : public KVStore {

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830772 ## File path: src/kvstore/kvstore_local.h ## @@ -135,6 +135,13 @@ class KVStoreLocal : public KVStore {

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830615 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -428,20 +468,42 @@ class KVStoreDistServer { }

[GitHub] cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830311 ## File path: src/kvstore/comm.h ## @@ -79,8 +79,35 @@ class Comm { return pinned_ctx_; } +

[GitHub] piiswrong commented on issue #8337: mx.autograd.grad works or fails depending on use of slices

2017-10-19 Thread GitBox
piiswrong commented on issue #8337: mx.autograd.grad works or fails depending on use of slices URL: https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-338043714 Looks like it might be a bug. I'll look into it. But getting the symbol from an autograd NDArray and

[GitHub] piiswrong commented on issue #8312: Gradient function not returning enough gradients

2017-10-19 Thread GitBox
piiswrong commented on issue #8312: Gradient function not returning enough gradients URL: https://github.com/apache/incubator-mxnet/issues/8312#issuecomment-338040865 Fixed here https://github.com/apache/incubator-mxnet/pull/8322

[GitHub] rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient compression

2017-10-19 Thread GitBox
rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient compression URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145826758 ## File path: python/mxnet/kvstore.py ## @@ -349,6 +349,101 @@ def row_sparse_pull(self, key, out=None,

[GitHub] cjolivier01 opened a new pull request #8351: Allow test to converge

2017-10-19 Thread GitBox
cjolivier01 opened a new pull request #8351: Allow test to converge URL: https://github.com/apache/incubator-mxnet/pull/8351 ## Description ## (Brief description on what this PR is about) ## Checklist ## ### Essentials ### - [ ] Passed code style checking (`make lint`) -

[GitHub] reminisce commented on issue #8292: mx.nd.array indexing broken in armv7 / raspberrypi / jessie 8.0 (5 dimensional tensor)

2017-10-19 Thread GitBox
reminisce commented on issue #8292: mx.nd.array indexing broken in armv7 / raspberrypi / jessie 8.0 (5 dimensional tensor) URL: https://github.com/apache/incubator-mxnet/issues/8292#issuecomment-338034856 @larroy Yes, there is a high chance that something went wrong in the op's backend

[GitHub] cjolivier01 commented on issue #8340: Fill optimizations

2017-10-19 Thread GitBox
cjolivier01 commented on issue #8340: Fill optimizations URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-338025849 'full' on the radar? that struct isn't used. if someone wants to use it, they can add it. I am not going to add an unused kernel.

[GitHub] benqua commented on issue #8297: [scala] Make accuracy idependant of output size (fix #8226)

2017-10-19 Thread GitBox
benqua commented on issue #8297: [scala] Make accuracy idependant of output size (fix #8226) URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-338018171 @javelinjs ok, I updated the PR as you suggested. (I should maybe have done a new one because the comment thread

[GitHub] nickeleres commented on issue #8350: Incorrect implied shape inside loss function

2017-10-19 Thread GitBox
nickeleres commented on issue #8350: Incorrect implied shape inside loss function URL: https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338004788 @pluskid closed a similar unresolved issue https://github.com/apache/incubator-mxnet/issues/880

[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-19 Thread GitBox
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145793132 ## File path: tests/python/unittest/test_sparse_ndarray.py ## @@ -647,7 +647,20 @@ def

[GitHub] nickeleres opened a new issue #8350: Implied shape when calculating loss is incorrect

2017-10-19 Thread GitBox
nickeleres opened a new issue #8350: Implied shape when calculating loss is incorrect URL: https://github.com/apache/incubator-mxnet/issues/8350 I've seen this brought up in a couple of other issues, but it hasn't been resolved as far as I know. The data I am feeding into my loss

[GitHub] nickeleres commented on issue #4045: inferred shape error with FullyConnected layer

2017-10-19 Thread GitBox
nickeleres commented on issue #4045: inferred shape error with FullyConnected layer URL: https://github.com/apache/incubator-mxnet/issues/4045#issuecomment-337983895 I am having the same issue, where it thinks my weight matrix should be shape (N, 1)

[GitHub] larroy commented on issue #8292: mx.nd.array indexing broken in armv7 / raspberrypi / jessie 8.0 (5 dimensional tensor)

2017-10-19 Thread GitBox
larroy commented on issue #8292: mx.nd.array indexing broken in armv7 / raspberrypi / jessie 8.0 (5 dimensional tensor) URL: https://github.com/apache/incubator-mxnet/issues/8292#issuecomment-337971985 I think the problem must be in array slicing inside the native lib. I have debugged

[GitHub] wzhang1 commented on issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20%

2017-10-19 Thread GitBox
wzhang1 commented on issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20% URL: https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-337967220 I've seen some cuDNN examples run slower on Windows than on Linux, then gave up on Windows. Is this an MXNet

[GitHub] tdomhan commented on issue #8334: Bugfix: Python 3 compatiblity during optimizer serialization.

2017-10-19 Thread GitBox
tdomhan commented on issue #8334: Bugfix: Python 3 compatiblity during optimizer serialization. URL: https://github.com/apache/incubator-mxnet/pull/8334#issuecomment-337964938 Sounds good. I updated the PR.

[GitHub] cjolivier01 commented on issue #8343: [CMAKE] Cmake changes, upgrade training test so it converge

2017-10-19 Thread GitBox
cjolivier01 commented on issue #8343: [CMAKE] Cmake changes, upgrade training test so it converge URL: https://github.com/apache/incubator-mxnet/pull/8343#issuecomment-337963496 Triggered a rebuild attempt due to CI problems

[GitHub] ShownX opened a new issue #8349: [New features] bincounts

2017-10-19 Thread GitBox
ShownX opened a new issue #8349: [New features] bincounts URL: https://github.com/apache/incubator-mxnet/issues/8349 Asking for a bincount function on ndarray; right now I can only use the function from numpy (see #8193).
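The request above is for a counting operation MXNet's ndarray does not provide. The computation itself is simple; here is a minimal pure-Python sketch of what `bincount` does (mirroring numpy's behavior for 1-D non-negative integer input), not the actual MXNet or numpy implementation:

```python
def bincount(xs):
    """Count occurrences of each non-negative int in xs.

    Like numpy.bincount for 1-D integer input: the result has
    max(xs) + 1 slots, and slot i holds how often i appears.
    """
    out = [0] * (max(xs) + 1 if xs else 0)
    for x in xs:
        out[x] += 1
    return out

print(bincount([0, 1, 1, 3, 2, 1, 7]))  # [1, 3, 1, 1, 0, 0, 0, 1]
```

As the comment suggests, the interim workaround is to round-trip through numpy, e.g. something like `np.bincount(arr.asnumpy().astype('int64'))` (`asnumpy()` being the standard ndarray-to-numpy conversion; the exact call here is an illustrative assumption).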

[GitHub] mseeger commented on issue #8323: clean up math operators

2017-10-19 Thread GitBox
mseeger commented on issue #8323: clean up math operators URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337948535 Hi Eric, I did some tests to confirm that this solution works. Please do merge it in. Otherwise, if you are too busy, I can take it over.

[GitHub] qiliux opened a new issue #8348: mxnet.gluon.data.vision.ImageRecordDataset key error

2017-10-19 Thread GitBox
qiliux opened a new issue #8348: mxnet.gluon.data.vision.ImageRecordDataset key error URL: https://github.com/apache/incubator-mxnet/issues/8348

[GitHub] larroy commented on issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray unit test fails

2017-10-19 Thread GitBox
larroy commented on issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray unit test fails URL: https://github.com/apache/incubator-mxnet/issues/8231#issuecomment-337896318 Duplicate of https://github.com/apache/incubator-mxnet/issues/8292

[GitHub] agataradys commented on issue #8291: Import error in SSD example

2017-10-19 Thread GitBox
agataradys commented on issue #8291: Import error in SSD example URL: https://github.com/apache/incubator-mxnet/issues/8291#issuecomment-337885455 @edmBernard Thank you, this solution worked. Are you going to fix this example to work on both Python versions? Do you still support Python2?

[GitHub] novioleo commented on issue #8151: Amalgamation for android arm64 was built successfully but failed to run in device

2017-10-19 Thread GitBox
novioleo commented on issue #8151: Amalgamation for android arm64 was built successfully but failed to run in device URL: https://github.com/apache/incubator-mxnet/issues/8151#issuecomment-337869571 @zhenglaizhang you also have to add indexedRecordIOSplitter into mxnet_predict0.cc

[GitHub] mseeger commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0

2017-10-19 Thread GitBox
mseeger commented on issue #8338: master branch cannot build on centos 7 with cuda-8.0 URL: https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337862092 The error messages seem to depend on mshadow_op.h only through smooth_l1_gradient. And that code is really independent

[GitHub] jiarenyf commented on issue #8347: CTC Example Problem

2017-10-19 Thread GitBox
jiarenyf commented on issue #8347: CTC Example Problem URL: https://github.com/apache/incubator-mxnet/issues/8347#issuecomment-337843827 @pluskid @thinxer

[GitHub] gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop)

2017-10-19 Thread GitBox
gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop) URL: https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337843053 thanks for the info, but my memory still keeps increasing when the speed is too fast (like 140 samples/s) ^_^ @kazizzad

[GitHub] jiarenyf opened a new issue #8347: CTC Example Problem

2017-10-19 Thread GitBox
jiarenyf opened a new issue #8347: CTC Example Problem URL: https://github.com/apache/incubator-mxnet/issues/8347

[GitHub] kazizzad commented on issue #8189: Feed forward pass memory leaks (using htop)

2017-10-19 Thread GitBox
kazizzad commented on issue #8189: Feed forward pass memory leaks (using htop) URL: https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337834849 Hey, are you still running it in the Jupyter notebook? If yes, use this: jupyter nbconvert --to script

[GitHub] javelinjs commented on issue #8297: [scala] Make accuracy independent of output size (fix #8226)

2017-10-19 Thread GitBox
javelinjs commented on issue #8297: [scala] Make accuracy independent of output size (fix #8226) URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-337832673 We can convert it back to Float when calling `EvalMetric.get`
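The PR's approach, as the comment describes it, is to accumulate raw counts and only convert to the reported type when `EvalMetric.get` is called, so the value does not depend on how many outputs each batch produced. A minimal Python sketch of that accumulation pattern (hypothetical class, not the actual Scala implementation under review):

```python
class Accuracy:
    """Accumulate correct/total counts across updates; the reported value
    is computed only in get(), so it is independent of batch/output size."""

    def __init__(self):
        self.sum_metric = 0   # number of correct predictions seen so far
        self.num_inst = 0     # number of instances seen so far

    def update(self, labels, preds):
        for label, pred in zip(labels, preds):
            if label == pred:
                self.sum_metric += 1
            self.num_inst += 1

    def get(self):
        # Divide (and convert to float) only at read time, as the comment suggests.
        value = self.sum_metric / self.num_inst if self.num_inst else float("nan")
        return ("accuracy", value)

m = Accuracy()
m.update([1, 0, 1], [1, 1, 1])  # 2 of 3 correct
m.update([0], [0])              # 3 of 4 correct overall
print(m.get())  # ('accuracy', 0.75)
```

Because only integer counts are carried between updates, batches of different sizes weight each instance equally instead of averaging per-batch accuracies.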

[GitHub] zhenglaizhang commented on issue #8151: Amalgamation for android arm64 was built successfully but failed to run in device

2017-10-19 Thread GitBox
zhenglaizhang commented on issue #8151: Amalgamation for android arm64 was built successfully but failed to run in device URL: https://github.com/apache/incubator-mxnet/issues/8151#issuecomment-337827859 @larroy Hi, sorry for the late response, thanks for the help, with the help from other

[GitHub] zhenglaizhang commented on issue #4783: [v0.9.3] Amalgamation for Android broken

2017-10-19 Thread GitBox
zhenglaizhang commented on issue #4783: [v0.9.3] Amalgamation for Android broken URL: https://github.com/apache/incubator-mxnet/issues/4783#issuecomment-337826814 @novioleo yeah, thanks for the info, I succeeded in building the JNI .so.

[GitHub] gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop)

2017-10-19 Thread GitBox
gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop) URL: https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337824849 I've got the same problem during training: when there is no 'time gap' between each mod.forward, backward, and update call, memory usage
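The symptom both reports describe, memory growing whenever batches are submitted with no gap between them, is what happens when work is queued faster than an asynchronous engine executes it. A plain-Python illustration (not MXNet-specific; queue size stands in for engine memory) of why bounding the number of pending batches caps that growth:

```python
import queue
import threading

PENDING_LIMIT = 4
work = queue.Queue(maxsize=PENDING_LIMIT)  # bounded: put() blocks when full
peak_pending = 0

def consumer():
    # Stands in for the engine draining queued forward/backward/update work.
    while True:
        item = work.get()
        if item is None:
            break

t = threading.Thread(target=consumer)
t.start()

for batch in range(100):
    # With an unbounded queue, a fast producer lets pending work (and the
    # memory backing it) grow without limit; the bound applies backpressure.
    work.put(batch)
    peak_pending = max(peak_pending, work.qsize())

work.put(None)  # sentinel to stop the consumer
t.join()
print(peak_pending <= PENDING_LIMIT)  # True
```

In MXNet terms the analogous remedy is to synchronize periodically (e.g. something like `mx.nd.waitall()` every N batches, an assumption about the workaround rather than a documented fix for this issue) so queued operations are drained before more are submitted.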

[GitHub] zhreshold commented on issue #8325: Fix typo in gluon l1loss docstring

2017-10-19 Thread GitBox
zhreshold commented on issue #8325: Fix typo in gluon l1loss docstring URL: https://github.com/apache/incubator-mxnet/pull/8325#issuecomment-337814203 Please rebase to master to fix the CI
