[GitHub] TaoLv commented on issue #10316: MultiBoxDetection cannot pass consistency check

2018-04-02 Thread GitBox
TaoLv commented on issue #10316: MultiBoxDetection cannot pass consistency check URL: https://github.com/apache/incubator-mxnet/issues/10316#issuecomment-378132736 There is an `atomicAdd` in the [cuda

[GitHub] anirudh2290 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-02 Thread GitBox
anirudh2290 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#issuecomment-378058175 dot(dns, csr) outputs a csr ndarray when the cpu context is used, and a dns ndarray when the gpu context is used.
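
A minimal sketch of the behavior under discussion, assuming `mx.nd.sparse.dot` dispatches on storage type as described in the thread (shapes and values are illustrative):

```python
import mxnet as mx

dns = mx.nd.ones((3, 4))                 # dense lhs
csr = mx.nd.ones((4, 5)).tostype('csr')  # sparse rhs in CSR format
# Per the discussion, the output storage type depends on the context:
# csr on CPU, dense on the GPU path this PR adds.
out = mx.nd.sparse.dot(dns, csr)
print(out.stype)
```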

[GitHub] xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378125832 I think the failure of this ut test may be related to this old version of mklml.

[GitHub] xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378132307 @zheng-da Yes, I've made a mistake. It's mklml, not mkldnn :)

[GitHub] TaoLv commented on a change in pull request #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
TaoLv commented on a change in pull request #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#discussion_r178716118 ## File path: tests/cpp/operator/mkldnn.cc ## @@ -77,4 +77,11 @@ TEST(MKLDNN_UTIL_FUNC, AlignMem) {

[GitHub] anirudhacharya commented on issue #10298: Mxnet not loading

2018-04-02 Thread GitBox
anirudhacharya commented on issue #10298: Mxnet not loading URL: https://github.com/apache/incubator-mxnet/issues/10298#issuecomment-378130716 @FinScience What is the MXNet version you are using? In the meantime, try the following - ``` install.packages("devtools")

[GitHub] zheng-da commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
zheng-da commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378129789 @xinyu-intel I guess you mean mklml?

[GitHub] moveforever commented on issue #10310: MemoryError on linear classification with 400million dimension feature input

2018-04-02 Thread GitBox
moveforever commented on issue #10310: MemoryError on linear classification with 400million dimension feature input URL: https://github.com/apache/incubator-mxnet/issues/10310#issuecomment-378109878 I came across the same error. The cause of the error is saving intermediate

[GitHub] asitstands commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context

2018-04-02 Thread GitBox
asitstands commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378108174 @piiswrong Omitting `ctx` argument usually means that it is the
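
A hypothetical usage sketch of the API this PR proposes; the exact signature (a seed value plus a `ctx` keyword) is assumed from the PR title and discussion:

```python
import mxnet as mx

# Seed only the generators of one device context, leaving the
# generators of all other contexts untouched (assumed signature).
mx.random.seed_context(128, ctx=mx.gpu(0))
```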

[GitHub] HeliWang opened a new issue #10378: Inconsistent output from mxnet-python and mxnet-scala

2018-04-02 Thread GitBox
HeliWang opened a new issue #10378: Inconsistent output from mxnet-python and mxnet-scala URL: https://github.com/apache/incubator-mxnet/issues/10378 ## Description Inconsistent output from mxnet-python and mxnet-scala when importing the same mxnet saved model (kim-0100.params,

[GitHub] eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178693861 ## File path: src/operator/tensor/dot-inl.h ##

[GitHub] TaoLv commented on a change in pull request #10315: [MXNET-249] Add inplace support to mkldnn sum

2018-04-02 Thread GitBox
TaoLv commented on a change in pull request #10315: [MXNET-249] Add inplace support to mkldnn sum URL: https://github.com/apache/incubator-mxnet/pull/10315#discussion_r178693603 ## File path: src/operator/nn/mkldnn/mkldnn_sum.cc ## @@ -49,23 +49,42 @@ void Sum(const

[GitHub] sxjscience commented on issue #9881: Inconsistent weight decay logics in multiple optimizers

2018-04-02 Thread GitBox
sxjscience commented on issue #9881: Inconsistent weight decay logics in multiple optimizers URL: https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-378099199 Looks good. We can write math formulas instead.
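
For concreteness, the two weight-decay conventions being compared can be written out as below; which optimizers implement which form is exactly what the thread is auditing:

```latex
% wd folded into the gradient (L2-regularization style):
w_{t+1} = w_t - \eta\,(\nabla L(w_t) + \lambda w_t)
% decoupled weight decay applied directly to the weight:
w_{t+1} = (1 - \eta\lambda)\,w_t - \eta\,\nabla L(w_t)
```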

[GitHub] sxjscience commented on issue #10041: Reduce operators do not support axis=None

2018-04-02 Thread GitBox
sxjscience commented on issue #10041: Reduce operators do not support axis=None URL: https://github.com/apache/incubator-mxnet/issues/10041#issuecomment-378098968 `None` should represent "empty-axis" and should perform a global reduction. It's very common in numpy.
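
For reference, the numpy convention the comment appeals to:

```python
import numpy as np

a = np.ones((100, 100))
print(np.sum(a, axis=None))     # 10000.0 -- axis=None reduces over all axes
print(np.sum(a, axis=0).shape)  # (100,)  -- a single-axis reduction, for contrast
```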

[GitHub] fedorzh commented on issue #9410: Training with the same parameters and seed gets significantly different results

2018-04-02 Thread GitBox
fedorzh commented on issue #9410: Training with the same parameters and seed gets significantly different results URL: https://github.com/apache/incubator-mxnet/issues/9410#issuecomment-378098687 @cjolivier01 The function `convert_gluon_dataset_to_numpy` which converts gluon dataset

[GitHub] solin319 commented on issue #10366: fix bug in sgd

2018-04-02 Thread GitBox
solin319 commented on issue #10366: fix bug in sgd URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378098403 @eric-haibin-lin MXNET_EXEC_NUM_TEMP doesn't work. But making MXNET_CPU_TEMP_COPY and MXNET_GPU_TEMP_COPY larger can solve the overlap problem. It's

[GitHub] anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178691985 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] eric-haibin-lin commented on issue #9881: Inconsistent weight decay logics in multiple optimizers

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #9881: Inconsistent weight decay logics in multiple optimizers URL: https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-378096575 @szhengac thanks for your inputs. My concern is that if we change the wd behavior now, the change is

[GitHub] eric-haibin-lin commented on issue #8914: The custom operator not supported for group context?

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #8914: The custom operator not supported for group context? URL: https://github.com/apache/incubator-mxnet/issues/8914#issuecomment-378095174 Verified on my end that the fix works.

[GitHub] eric-haibin-lin closed issue #8914: The custom operator not supported for group context?

2018-04-02 Thread GitBox
eric-haibin-lin closed issue #8914: The custom operator not supported for group context? URL: https://github.com/apache/incubator-mxnet/issues/8914

[GitHub] eric-haibin-lin commented on issue #10041: Reduce operators do not support axis=None

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #10041: Reduce operators do not support axis=None URL: https://github.com/apache/incubator-mxnet/issues/10041#issuecomment-378094996 `b = mx.nd.sum(mx.nd.ones((100, 100)))` will work. I guess the problem is that if the user provides axis=None, it won't
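
A minimal reproduction of the gap as reported in the issue (the second call is the one said to fail at the time):

```python
import mxnet as mx

a = mx.nd.ones((100, 100))
b = mx.nd.sum(a)             # omitting axis performs a global reduction
c = mx.nd.sum(a, axis=None)  # explicitly passing axis=None is what the issue reports as broken
```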

[GitHub] eric-haibin-lin commented on issue #2317: (info.type) != (kNotInitialized)

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #2317: (info.type) != (kNotInitialized) URL: https://github.com/apache/incubator-mxnet/issues/2317#issuecomment-378094223 #8055 fixes the example test case @sxjscience provides. I'm gonna close it for now. Feel free to reopen it

[GitHub] eric-haibin-lin closed issue #2317: (info.type) != (kNotInitialized)

2018-04-02 Thread GitBox
eric-haibin-lin closed issue #2317: (info.type) != (kNotInitialized) URL: https://github.com/apache/incubator-mxnet/issues/2317

[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378094033 @reminisce Thanks for the explanation about `asscalar`. I'm sorry that I didn't make it clear about the OOM. When I said iteration, I meant the
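
For readers hitting the same slowdown: `asscalar` copies a single element to the host and blocks until that array's pending computation has finished, so calling it on every iteration serializes MXNet's asynchronous engine. A sketch of a common mitigation (the loop body is a stand-in for real training code):

```python
import mxnet as mx

losses = []
for i in range(100):
    loss = mx.nd.ones((1,)) * i  # stand-in for a real loss computation
    losses.append(loss)          # keep the value on the device; stays asynchronous
    if (i + 1) % 50 == 0:
        # asscalar() blocks until the computation behind each array finishes,
        # so synchronizing rarely amortizes the cost.
        print(sum(l.asscalar() for l in losses) / len(losses))
        losses = []
```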

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687832 ## File path: include/mxnet/ndarray.h ## @@ -507,6 +507,35 @@ class NDArray { ret.reuse_ = true;

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686915 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686409 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687188 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687532 ## File path: tests/python/unittest/test_operator.py ## @@ -4059,6 +4059,79 @@ def

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687478 ## File path: tests/python/unittest/test_operator.py ## @@ -4059,6 +4059,79 @@ def

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687355 ## File path: src/operator/custom/custom-inl.h ## @@ -64,31 +64,59 @@ class CustomOperator {

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687755 ## File path: include/mxnet/ndarray.h ## @@ -507,6 +507,35 @@ class NDArray { ret.reuse_ = true;

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178669126 ## File path: example/numpy-ops/custom_sparse_sqr.py ## @@ -0,0 +1,115 @@ +# Licensed to the Apache

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686280 ## File path: src/operator/operator_common.h ## @@ -314,6 +314,32 @@ inline bool

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686494 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378090765 @zheng-da will double check. The tests in python2/3 MKLDNN-CPU passed.

[GitHub] aaronmarkham commented on a change in pull request #10013: [MXNET-48] update on setting up Scala with MXNet and the IntelliJ IDE

2018-04-02 Thread GitBox
aaronmarkham commented on a change in pull request #10013: [MXNET-48] update on setting up Scala with MXNet and the IntelliJ IDE URL: https://github.com/apache/incubator-mxnet/pull/10013#discussion_r178686519 ## File path: docs/tutorials/scala/mxnet_scala_on_intellij.md ##

[incubator-mxnet] branch master updated: [MXNET-72] Improve sparse sgd on GPU (#10293)

2018-04-02 Thread haibin
This is an automated email from the ASF dual-hosted git repository. haibin pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 5245ef6 [MXNET-72] Improve sparse sgd

[GitHub] eric-haibin-lin closed pull request #10293: [MXNET-72] Improve sparse sgd on GPU

2018-04-02 Thread GitBox
eric-haibin-lin closed pull request #10293: [MXNET-72] Improve sparse sgd on GPU URL: https://github.com/apache/incubator-mxnet/pull/10293

[GitHub] marcoabreu opened a new issue #9000: Flaky test OOM: test_optimizers:test_sgd

2018-04-02 Thread GitBox
marcoabreu opened a new issue #9000: Flaky test OOM: test_optimizers:test_sgd URL: https://github.com/apache/incubator-mxnet/issues/9000 tests/ci_build/ci_build.sh gpu_mklml PYTHONPATH=./python/ nosetests-3.4 --with-timer --verbose tests/python/gpu ``` test_operator_gpu.test_sgd

[GitHub] eric-haibin-lin commented on issue #9000: Flaky test OOM: test_optimizers:test_sgd

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #9000: Flaky test OOM: test_optimizers:test_sgd URL: https://github.com/apache/incubator-mxnet/issues/9000#issuecomment-378085649 Is this fixed? Looks like the test is not enabled yet.

[GitHub] eric-haibin-lin commented on issue #10285: [MXNET-241] Module API for distributed training w/ row_sparse weight

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #10285: [MXNET-241] Module API for distributed training w/ row_sparse weight URL: https://github.com/apache/incubator-mxnet/pull/10285#issuecomment-378084842 Added more description to `update` and `prepare` to explain why we need this function and what

[GitHub] szhengac opened a new issue #10377: Inconsistency between ndarray and symbol when performing division

2018-04-02 Thread GitBox
szhengac opened a new issue #10377: Inconsistency between ndarray and symbol when performing division URL: https://github.com/apache/incubator-mxnet/issues/10377 When doing division with ndarray, we can write `x[:] /= F.sum(x, axis=-1)`. But with symbol, we need to use `x =
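
A sketch of the inconsistency being reported, written with `broadcast_div` and `keepdims=True` so the shapes line up (the original snippet elides the symbol form):

```python
import mxnet as mx

# NDArray is mutable, so the normalization can be written in place:
x = mx.nd.array([[1., 2., 3.], [4., 5., 6.]])
x[:] = mx.nd.broadcast_div(x, mx.nd.sum(x, axis=-1, keepdims=True))

# A Symbol is an immutable graph node, so there is no in-place form;
# the name has to be rebound to a new node instead:
s = mx.sym.Variable('x')
s = mx.sym.broadcast_div(s, mx.sym.sum(s, axis=-1, keepdims=True))
```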

[GitHub] cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions

2018-04-02 Thread GitBox
cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378082454 It may not be worth supporting Fermi architecture.

[GitHub] cjolivier01 commented on issue #9410: Training with the same parameters and seed gets significantly different results

2018-04-02 Thread GitBox
cjolivier01 commented on issue #9410: Training with the same parameters and seed gets significantly different results URL: https://github.com/apache/incubator-mxnet/issues/9410#issuecomment-378081463 Is this supposed to take a really long time to run? It takes many minutes...

[GitHub] eric-haibin-lin commented on issue #10373: Adding a BSD file to LICENSE

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #10373: Adding a BSD file to LICENSE URL: https://github.com/apache/incubator-mxnet/pull/10373#issuecomment-378078762 Thanks for fixing this.

[incubator-mxnet] branch master updated: Adding a file to LICENSE (#10373)

2018-04-02 Thread haibin
This is an automated email from the ASF dual-hosted git repository. haibin pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 6dd85e2 Adding a file to LICENSE

[GitHub] eric-haibin-lin closed pull request #10373: Adding a BSD file to LICENSE

2018-04-02 Thread GitBox
eric-haibin-lin closed pull request #10373: Adding a BSD file to LICENSE URL: https://github.com/apache/incubator-mxnet/pull/10373

[GitHub] lanking520 commented on a change in pull request #10346: [MXNET-256] Add CI Test for GPU

2018-04-02 Thread GitBox
lanking520 commented on a change in pull request #10346: [MXNET-256] Add CI Test for GPU URL: https://github.com/apache/incubator-mxnet/pull/10346#discussion_r178676143 ## File path: ci/docker/runtime_functions.sh ## @@ -427,6 +427,12 @@ unittest_ubuntu_cpu_scala() {

[GitHub] eric-haibin-lin commented on a change in pull request #10013: [MXNET-48] update on setting up Scala with MXNet and the IntelliJ IDE

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10013: [MXNET-48] update on setting up Scala with MXNet and the IntelliJ IDE URL: https://github.com/apache/incubator-mxnet/pull/10013#discussion_r178675676 ## File path: docs/tutorials/scala/mxnet_scala_on_intellij.md

[GitHub] szha commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-02 Thread GitBox
szha commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178675441 ## File path: python/mxnet/context.py ## @@ -212,6 +216,14 @@ def gpu(device_id=0): return Context('gpu',
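
A usage sketch for the API under review, assuming it lands as `mx.context.num_gpus()`:

```python
import mxnet as mx

# Pick GPU contexts when any are available, otherwise fall back to CPU.
n = mx.context.num_gpus()  # the query this PR exposes (assumed name)
ctxs = [mx.gpu(i) for i in range(n)] if n > 0 else [mx.cpu()]
```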

[incubator-mxnet] branch master updated: [MXNET-146] Docs build updates: added some deps; clarified developer builds (#10270)

2018-04-02 Thread haibin
This is an automated email from the ASF dual-hosted git repository. haibin pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new c3c2676 [MXNET-146] Docs build

[GitHub] eric-haibin-lin closed pull request #10270: [MXNET-146] Docs build updates: added some deps; clarified developer builds

2018-04-02 Thread GitBox
eric-haibin-lin closed pull request #10270: [MXNET-146] Docs build updates: added some deps; clarified developer builds URL: https://github.com/apache/incubator-mxnet/pull/10270

[GitHub] szha commented on issue #10360: extend ndarray in-place reshape

2018-04-02 Thread GitBox
szha commented on issue #10360: extend ndarray in-place reshape URL: https://github.com/apache/incubator-mxnet/pull/10360#issuecomment-378076559 Doc can be found at

[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178673220 ## File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc ##

[GitHub] indhub closed pull request #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon

2018-04-02 Thread GitBox
indhub closed pull request #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon URL: https://github.com/apache/incubator-mxnet/pull/10283

[incubator-mxnet] branch master updated: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon (#10283)

2018-04-02 Thread indhub
This is an automated email from the ASF dual-hosted git repository. indhub pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 224b0c9 [MXNET-242][Tutorial]

[GitHub] eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665723 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -170,43 +216,90 @@

[GitHub] eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665130 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -170,43 +216,90 @@

[GitHub] eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665347 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -170,43 +216,90 @@

[GitHub] eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178666240 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -220,175 +313,229 @@

[GitHub] eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178646489 ## File path: python/mxnet/kvstore.py ## @@ -474,6 +474,8 @@ def

[GitHub] cjolivier01 commented on issue #9632: Support pre-kepler GPUs without __shfl_down instruction

2018-04-02 Thread GitBox
cjolivier01 commented on issue #9632: Support pre-kepler GPUs without __shfl_down instruction URL: https://github.com/apache/incubator-mxnet/issues/9632#issuecomment-378064596 PR: https://github.com/apache/incubator-mxnet/pull/10375

[GitHub] cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions

2018-04-02 Thread GitBox
cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378064463 https://issues.apache.org/jira/browse/MXNET-187

[GitHub] eric-haibin-lin commented on issue #10366: fix bug in sgd

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #10366: fix bug in sgd URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378062143 Would setting MXNET_EXEC_NUM_TEMP help? @solin319 https://github.com/apache/incubator-mxnet/blob/master/docs/faq/env_var.md#memory-options
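
For context, MXNet reads these variables from the process environment, so setting one before `mxnet` is imported is the safe ordering; a sketch (the value 2 is arbitrary):

```python
import os

# Set before importing mxnet so the backend is guaranteed to see it.
os.environ['MXNET_EXEC_NUM_TEMP'] = '2'  # documented in docs/faq/env_var.md

import mxnet as mx
```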

[GitHub] marcoabreu opened a new issue #10376: Flaky test_gluon.test_lambda

2018-04-02 Thread GitBox
marcoabreu opened a new issue #10376: Flaky test_gluon.test_lambda URL: https://github.com/apache/incubator-mxnet/issues/10376 http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-10373/1/pipeline ```

[incubator-mxnet] branch master updated: improve sparse.adagrad on GPU (#10312)

2018-04-02 Thread haibin
This is an automated email from the ASF dual-hosted git repository. haibin pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new eaa954c improve sparse.adagrad on GPU

[GitHub] eric-haibin-lin closed pull request #10312: [MXNET-72] improve sparse adagrad on GPU

2018-04-02 Thread GitBox
eric-haibin-lin closed pull request #10312: [MXNET-72] improve sparse adagrad on GPU URL: https://github.com/apache/incubator-mxnet/pull/10312

[GitHub] rahul003 commented on issue #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon

2018-04-02 Thread GitBox
rahul003 commented on issue #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon URL: https://github.com/apache/incubator-mxnet/pull/10283#issuecomment-378060200 The issue was likely with download, unable to reproduce the issue after fresh download. Sorry for the confusion.

[GitHub] haojin2 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-02 Thread GitBox
haojin2 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#issuecomment-378059763 @anirudh2290 Corresponding documentation will be added once the implementations and tests

[GitHub] mbaijal commented on issue #10330: [Post 1.1][WIP] Couple of License Issues from 1.1 Release

2018-04-02 Thread GitBox
mbaijal commented on issue #10330: [Post 1.1][WIP] Couple of License Issues from 1.1 Release URL: https://github.com/apache/incubator-mxnet/issues/10330#issuecomment-378059812 a. Fixed in PR #10373 b. Needs some investigation into the best fix, since we can't modify the 3rdparty submodule

[GitHub] nswamy commented on a change in pull request #10346: [MXNET-256] Add CI Test for GPU

2018-04-02 Thread GitBox
nswamy commented on a change in pull request #10346: [MXNET-256] Add CI Test for GPU URL: https://github.com/apache/incubator-mxnet/pull/10346#discussion_r178661422 ## File path: ci/docker/runtime_functions.sh ## @@ -427,6 +427,12 @@ unittest_ubuntu_cpu_scala() {

[GitHub] anirudh2290 commented on issue #10014: [MXNET-81] Fix crash with mx.nd.ones

2018-04-02 Thread GitBox
anirudh2290 commented on issue #10014: [MXNET-81] Fix crash with mx.nd.ones URL: https://github.com/apache/incubator-mxnet/pull/10014#issuecomment-378055652 @piiswrong Thanks for your review! Is this good to merge?

[GitHub] anirudh2290 opened a new pull request #10374: Sparse support for Custom Op

2018-04-02 Thread GitBox
anirudh2290 opened a new pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374 ## Description ## Adds sparse support for custom op. Registers InferStorageType and InferStorageTypeBackward interfaces for custom op. Registers Forward

[GitHub] cjolivier01 commented on issue #9632: Support pre-kepler GPUs without __shfl_down instruction

2018-04-02 Thread GitBox
cjolivier01 commented on issue #9632: Support pre-kepler GPUs without __shfl_down instruction URL: https://github.com/apache/incubator-mxnet/issues/9632#issuecomment-378048821 It appears that pytorch and TF don't support Fermi GPUs. Do we wish to continue support?

[GitHub] zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-02 Thread GitBox
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes. URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378047340 Could you please review this PR? @piiswrong @pengzhao-intel @TaoLv

[GitHub] cjolivier01 commented on issue #7848: __shfl_down is undefined. Is the end of CUDA 2.1 support?

2018-04-02 Thread GitBox
cjolivier01 commented on issue #7848: __shfl_down is undefined. Is the end of CUDA 2.1 support? URL: https://github.com/apache/incubator-mxnet/issues/7848#issuecomment-378046738 Redirect to: https://github.com/apache/incubator-mxnet/issues/9632

[GitHub] cjolivier01 closed issue #7848: __shfl_down is undefined. Is the end of CUDA 2.1 support?

2018-04-02 Thread GitBox
cjolivier01 closed issue #7848: __shfl_down is undefined. Is the end of CUDA 2.1 support? URL: https://github.com/apache/incubator-mxnet/issues/7848

[GitHub] mbaijal opened a new pull request #10373: Adding a BSD file to LICENSE

2018-04-02 Thread GitBox
mbaijal opened a new pull request #10373: Adding a BSD file to LICENSE URL: https://github.com/apache/incubator-mxnet/pull/10373 ## Description ## @eric-haibin-lin @marcoabreu Please review and merge Adding a file to top Level LICENSE file as per Issue #10330 part a and a couple of

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644856 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178645029 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644670 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] lanking520 commented on issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT?

2018-04-02 Thread GitBox
lanking520 commented on issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT? URL: https://github.com/apache/incubator-mxnet/issues/8132#issuecomment-378039215 @yxchng , is this still an issue for you?
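
For context, turning autotuning off entirely does silence the message; the open question in this issue is suppressing only the log line while keeping autotune on. A sketch of the blunt workaround:

```python
import os

# Disables cuDNN autotuning altogether, which also removes its log output.
os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'

import mxnet as mx
```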

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178645100 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644551 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178646005 ## File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc

[GitHub] anirudhacharya commented on issue #9702: [Post 1.1.0] Apply PR #9701 to the master branch

2018-04-02 Thread GitBox
anirudhacharya commented on issue #9702: [Post 1.1.0] Apply PR #9701 to the master branch URL: https://github.com/apache/incubator-mxnet/issues/9702#issuecomment-378037920 @mbaijal close this issue if it is resolved.

[GitHub] haojin2 commented on issue #10312: [MXNET-72] improve sparse adagrad on GPU

2018-04-02 Thread GitBox
haojin2 commented on issue #10312: [MXNET-72] improve sparse adagrad on GPU URL: https://github.com/apache/incubator-mxnet/pull/10312#issuecomment-378035280 LGTM!

[GitHub] charlieyou commented on issue #10073: NaN in loss when using gluon ELU block

2018-04-02 Thread GitBox
charlieyou commented on issue #10073: NaN in loss when using gluon ELU block URL: https://github.com/apache/incubator-mxnet/issues/10073#issuecomment-378035047 @dmas-at-wiris Is this still an issue for you? If so, could you provide an MWE? Thanks!

[GitHub] szha commented on issue #10345: allow block setattr to reset the prefix when setting new block

2018-04-02 Thread GitBox
szha commented on issue #10345: allow block setattr to reset the prefix when setting new block URL: https://github.com/apache/incubator-mxnet/pull/10345#issuecomment-378030956 If `net2.block1 = net1.block1` is a hack, what's the intended use case for `__setattr__` for blocks when there's
