[GitHub] II-Matto commented on issue #1356: override global initialization method in layer configuration

2018-04-03 Thread GitBox
II-Matto commented on issue #1356: override global initialization method in layer configuration URL: https://github.com/apache/incubator-mxnet/issues/1356#issuecomment-378214085 @Jing-Luo Hi, I also ran into the problem of the `Mixed` initializer with Gluon `Block`. Have you found any solution?

[GitHub] tdomhan commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812741 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] tdomhan commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812905 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] mrRo8o7 opened a new issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++

2018-04-03 Thread GitBox
mrRo8o7 opened a new issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++ URL: https://github.com/apache/incubator-mxnet/issues/10379 Hi, I want to ask about the difference between SyncCopyFromCPU and SyncCopyToCPU. I don't get it from the MXNet C++ reference.

[GitHub] tdomhan commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812137 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] solin319 commented on issue #10366: fix bug in sgd

2018-04-03 Thread GitBox
solin319 commented on issue #10366: fix bug in sgd URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378185953 With MXNET_CPU_TEMP_COPY = 100 set, when training ResNet-50 the sgd_mom_update still cannot start directly after the first backward computation.

[GitHub] KellenSunderland commented on issue #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
KellenSunderland commented on issue #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378235600 This has been really useful in other projects. I think it'd be great to have a few utility functions exposed in python to tell you a bit
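The kind of Python utility discussed here can also be approximated without any new C API by parsing `nvidia-smi -L`. The sketch below is an illustration only (the PR itself presumably exposes the count through MXNet's engine via something like `cudaGetDeviceCount`); it returns 0 when no NVIDIA driver tooling is present:

```python
import shutil
import subprocess

def num_gpus() -> int:
    """Best-effort GPU count via `nvidia-smi -L`; 0 if the tool is missing."""
    if shutil.which("nvidia-smi") is None:
        return 0
    try:
        out = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                             text=True, check=True).stdout
    except subprocess.CalledProcessError:
        return 0
    # nvidia-smi lists each device on a line starting with "GPU <index>:".
    return sum(1 for line in out.splitlines() if line.startswith("GPU "))

print(num_gpus())
```

This is only a fallback sketch; a proper binding queries the CUDA runtime directly.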

[GitHub] marcoabreu opened a new issue #10380: Flaky test_operator_gpu.test_deconvolution_options

2018-04-03 Thread GitBox
marcoabreu opened a new issue #10380: Flaky test_operator_gpu.test_deconvolution_options URL: https://github.com/apache/incubator-mxnet/issues/10380 http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/613/pipeline ```

[GitHub] solin319 opened a new pull request #10381: support profile can be saved to s3

2018-04-03 Thread GitBox
solin319 opened a new pull request #10381: support profile can be saved to s3 URL: https://github.com/apache/incubator-mxnet/pull/10381 We use dmlc::ostream to replace std::ostream. This allows the profile file to be saved to the S3 file system.

[GitHub] eitan3 commented on issue #9944: MXNet MinGW-w64 build error

2018-04-03 Thread GitBox
eitan3 commented on issue #9944: MXNet MinGW-w64 build error URL: https://github.com/apache/incubator-mxnet/issues/9944#issuecomment-378178379 @lebeg I'm having the same bug. I'm trying to compile MXNet for Windows 10 with VS2015. Without DUSE_CPP_PACKAGE, MXNet compiles perfectly; with

[GitHub] xinedison opened a new issue #10382: Does memonger work for gluon to save memory?

2018-04-03 Thread GitBox
xinedison opened a new issue #10382: Does memonger work for gluon to save memory? URL: https://github.com/apache/incubator-mxnet/issues/10382 ## Description I want to reduce GPU memory cost when using Gluon. I tried MXNet memonger but it did not work for me. After that I setting

[GitHub] alexmosc commented on issue #9358: Why does running 1 round of an MXNET model training produce Train-mse=NaN?

2018-04-03 Thread GitBox
alexmosc commented on issue #9358: Why does running 1 round of an MXNET model training produce Train-mse=NaN? URL: https://github.com/apache/incubator-mxnet/issues/9358#issuecomment-378198379 Thank you! This is going to be closed as fixed. Alexey

[GitHub] alexmosc closed issue #9358: Why does running 1 round of an MXNET model training produce Train-mse=NaN?

2018-04-03 Thread GitBox
alexmosc closed issue #9358: Why does running 1 round of an MXNET model training produce Train-mse=NaN? URL: https://github.com/apache/incubator-mxnet/issues/9358 This is an automated message from the Apache Git Service. To

[GitHub] tdomhan commented on issue #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on issue #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378237715 @KellenSunderland I also wasn't entirely sure whether this deserves a place in the core API, but also wasn't sure where else to put it. What would

[GitHub] tdomhan commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178811729 ## File path: python/mxnet/context.py ## @@ -212,6 +216,14 @@ def gpu(device_id=0): return Context('gpu',

[GitHub] tdomhan commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
tdomhan commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178811842 ## File path: python/mxnet/context.py ## @@ -212,6 +216,14 @@ def gpu(device_id=0): return Context('gpu',

[GitHub] xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-03 Thread GitBox
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378151474 I have tried the following four tests with seed(1): First two passed: exe1 = y1.simple_bind(**mx.cpu()**, x=shape)

[GitHub] mjpost commented on issue #10205: [Operator] Accelerate the CPU side performance of topk

2018-04-03 Thread GitBox
mjpost commented on issue #10205: [Operator] Accelerate the CPU side performance of topk URL: https://github.com/apache/incubator-mxnet/issues/10205#issuecomment-378152802 @sxjscience --- [the

[GitHub] asitstands commented on issue #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
asitstands commented on issue #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378152911 This is great. I needed this, thank you. If possible, could I ask a favor? Could you add functions to query the amount of

[GitHub] nju-luke closed issue #10368: asscalar is very slow

2018-04-03 Thread GitBox
nju-luke closed issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368

[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-03 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378137784 The OOM was fixed by changing the train_loss update to `train_loss += nd.mean(loss_).asscalar()`. Thanks @reminisce
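For context: `asscalar()` forces a synchronization with MXNet's asynchronous engine, while accumulating the loss as an NDArray keeps a growing chain of deferred results alive. A rough pure-NumPy analogue of the fixed pattern (treating `float(np.mean(...))` as a stand-in for `nd.mean(...).asscalar()`, with made-up batch data) looks like:

```python
import numpy as np

# Hypothetical stand-in for per-batch loss NDArrays.
batch_losses = [np.random.rand(32) for _ in range(5)]

train_loss = 0.0
for loss_ in batch_losses:
    # Reducing to a plain Python float each iteration means no array
    # (and, in MXNet, no deferred computation graph) is retained across batches.
    train_loss += float(np.mean(loss_))

print(train_loss)
```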

[GitHub] dabraude commented on issue #10261: [MXNET-128] added load from buffer functions

2018-04-03 Thread GitBox
dabraude commented on issue #10261: [MXNET-128] added load from buffer functions URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378145556 Hey @cjolivier01 I was wondering if I needed to do something else to get this merged?

[GitHub] yxchng closed issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT?

2018-04-03 Thread GitBox
yxchng closed issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT? URL: https://github.com/apache/incubator-mxnet/issues/8132

[GitHub] yxchng commented on issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT?

2018-04-03 Thread GitBox
yxchng commented on issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT? URL: https://github.com/apache/incubator-mxnet/issues/8132#issuecomment-378281644 @lanking520 nope thanks

[GitHub] piiswrong commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context

2018-04-03 Thread GitBox
piiswrong commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378314193 I still think it's better than adding an API. `seed_context`

[GitHub] TaoLv commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-03 Thread GitBox
TaoLv commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes. URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378274545 @zheng-da Do you have any performance update of this PR?

[GitHub] ShootingSpace opened a new pull request #10383: allow user to define unknown token symbol

2018-04-03 Thread GitBox
ShootingSpace opened a new pull request #10383: allow user to define unknown token symbol URL: https://github.com/apache/incubator-mxnet/pull/10383 ## Description ## Add new feature for issue [#10068](https://github.com/apache/incubator-mxnet/issues/10068). It allows unknown token to

[GitHub] ShootingSpace closed pull request #10383: allow user to define unknown token symbol

2018-04-03 Thread GitBox
ShootingSpace closed pull request #10383: allow user to define unknown token symbol URL: https://github.com/apache/incubator-mxnet/pull/10383 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As

[GitHub] piiswrong commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context

2018-04-03 Thread GitBox
piiswrong commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378314952 maybe you can make it defaults to 'all'

[GitHub] haojin2 commented on issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++

2018-04-03 Thread GitBox
haojin2 commented on issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++ URL: https://github.com/apache/incubator-mxnet/issues/10379#issuecomment-378320637 @mrRo8o7 The documentation has been added but may not be rendered on the webpage for some reason. You can find the

[GitHub] moshelooks opened a new pull request #10384: fix docstring for EvalMetric.update_dict

2018-04-03 Thread GitBox
moshelooks opened a new pull request #10384: fix docstring for EvalMetric.update_dict URL: https://github.com/apache/incubator-mxnet/pull/10384 list of NDArray is not a valid input, it needs to be a mapping from str -> NDArray ## Description ## Trivial docstring fix.

[GitHub] ShootingSpace opened a new pull request #10385: allow user to define unknown token symbol

2018-04-03 Thread GitBox
ShootingSpace opened a new pull request #10385: allow user to define unknown token symbol URL: https://github.com/apache/incubator-mxnet/pull/10385 ## Description ## Add new feature for issue [#10068](https://github.com/apache/incubator-mxnet/issues/10068). It allows unknown token to

[GitHub] sxjscience opened a new issue #10386: [Operator] Support ndim > 3 for batch_dot

2018-04-03 Thread GitBox
sxjscience opened a new issue #10386: [Operator] Support ndim > 3 for batch_dot URL: https://github.com/apache/incubator-mxnet/issues/10386 In batch_dot, i.e. `C = batch_dot(A, B)`, both inputs A & B must have ndim=3. However, in some cases our inputs could have higher
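The requested semantics can be illustrated with NumPy, whose `matmul` already broadcasts over leading batch axes; the shapes here are made up for the example. The same result is obtainable today with a 3-D-only `batch_dot` by collapsing the batch axes first:

```python
import numpy as np

# 4-D inputs: batch dims (2, 5), matrix shapes 3x4 and 4x6.
A = np.random.rand(2, 5, 3, 4)
B = np.random.rand(2, 5, 4, 6)

# Desired ndim > 3 behavior: multiply matrix pairs over all leading axes.
C = np.matmul(A, B)  # shape (2, 5, 3, 6)

# Workaround with a 3-D-only batch_dot: flatten batch dims, multiply, restore.
C2 = np.matmul(A.reshape(-1, 3, 4), B.reshape(-1, 4, 6)).reshape(2, 5, 3, 6)

assert np.allclose(C, C2)
```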

[GitHub] haojin2 commented on issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++

2018-04-03 Thread GitBox
haojin2 commented on issue #10379: When to use SyncCopyFromCPU and SyncCopyToCPU - C++ URL: https://github.com/apache/incubator-mxnet/issues/10379#issuecomment-378321886 In general, you would want to use SyncCopyFromCPU/SyncCopyToCPU when you have some contiguous memory region not yet

[incubator-mxnet] branch master updated: [MXNET-235] add axis support and gradient for L2norm (#9740)

2018-04-03 Thread jxie
This is an automated email from the ASF dual-hosted git repository. jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 62a615d [MXNET-235] add axis support

[GitHub] piiswrong closed pull request #9740: [MXNET-235] add axis support and gradient for L2norm

2018-04-03 Thread GitBox
piiswrong closed pull request #9740: [MXNET-235] add axis support and gradient for L2norm URL: https://github.com/apache/incubator-mxnet/pull/9740

[incubator-mxnet] branch master updated: [MXNET-256] Add CI Test for GPU (#10346)

2018-04-03 Thread nswamy
nswamy pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new eb94089 [MXNET-256] Add CI Test for

[GitHub] nswamy closed pull request #10346: [MXNET-256] Add CI Test for GPU

2018-04-03 Thread GitBox
nswamy closed pull request #10346: [MXNET-256] Add CI Test for GPU URL: https://github.com/apache/incubator-mxnet/pull/10346

[GitHub] piiswrong commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
piiswrong commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910318 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] piiswrong commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
piiswrong commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910455 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] indhub commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI

2018-04-03 Thread GitBox
indhub commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI URL: https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378357599 I think these icons were coming from this plugin:

[GitHub] eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178939597 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] piiswrong closed pull request #10014: [MXNET-81] Fix crash with mx.nd.ones

2018-04-03 Thread GitBox
piiswrong closed pull request #10014: [MXNET-81] Fix crash with mx.nd.ones URL: https://github.com/apache/incubator-mxnet/pull/10014

[incubator-mxnet] branch master updated: [MXNET-81] Fix crash with mx.nd.ones (#10014)

2018-04-03 Thread jxie
jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new a13c46c [MXNET-81] Fix crash with

[GitHub] piiswrong commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
piiswrong commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178911076 ## File path: python/mxnet/context.py ## @@ -212,6 +216,14 @@ def gpu(device_id=0): return

[GitHub] piiswrong commented on a change in pull request #10354: Expose the number of GPUs.

2018-04-03 Thread GitBox
piiswrong commented on a change in pull request #10354: Expose the number of GPUs. URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910925 ## File path: include/mxnet/base.h ## @@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {

[GitHub] anirudh2290 opened a new issue #10387: Flaky test(scala): test_arange

2018-04-03 Thread GitBox
anirudh2290 opened a new issue #10387: Flaky test(scala): test_arange URL: https://github.com/apache/incubator-mxnet/issues/10387 ``` *** 1 TEST FAILED *** [INFO] [INFO] Reactor Summary: [INFO]

[GitHub] piiswrong closed pull request #10364: [MXNET-260]remove use_fast_math

2018-04-03 Thread GitBox
piiswrong closed pull request #10364: [MXNET-260]remove use_fast_math URL: https://github.com/apache/incubator-mxnet/pull/10364

[incubator-mxnet] branch master updated: remove use_fast_math (#10364)

2018-04-03 Thread jxie
jxie pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 47d0b58 remove use_fast_math (#10364)

[GitHub] lanking520 commented on issue #10343: [MXNET-116] Optimized functions with batch input

2018-04-03 Thread GitBox
lanking520 commented on issue #10343: [MXNET-116] Optimized functions with batch input URL: https://github.com/apache/incubator-mxnet/pull/10343#issuecomment-378354689 @marcoabreu Could you please take a look here? This is an updated PR for testing the APIs on GPU, but it seemed the CI

[GitHub] anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-03 Thread GitBox
anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178925301 ## File path: include/mxnet/ndarray.h ## @@ -507,6 +507,35 @@ class NDArray { ret.reuse_ = true;

[GitHub] reminisce commented on issue #10341: Deadlock during ThreadedEnginePerDevice destructor after CuDNNConvolutionOp::SelectAlgo called.

2018-04-03 Thread GitBox
reminisce commented on issue #10341: Deadlock during ThreadedEnginePerDevice destructor after CuDNNConvolutionOp::SelectAlgo called. URL: https://github.com/apache/incubator-mxnet/issues/10341#issuecomment-378360207 I added several logging messages. It seems this function never returns.

[GitHub] cjolivier01 closed pull request #10261: [MXNET-128] added load from buffer functions

2018-04-03 Thread GitBox
cjolivier01 closed pull request #10261: [MXNET-128] added load from buffer functions URL: https://github.com/apache/incubator-mxnet/pull/10261

[incubator-mxnet] branch master updated: [MXNET-128] added load from buffer functions (#10261)

2018-04-03 Thread cjolivier01
cjolivier01 pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new a157d17 [MXNET-128] added load

[GitHub] cjolivier01 commented on issue #10261: [MXNET-128] added load from buffer functions

2018-04-03 Thread GitBox
cjolivier01 commented on issue #10261: [MXNET-128] added load from buffer functions URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378375590 ok

[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-03 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946311 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -170,43 +216,90 @@ class

[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-03 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946606 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -220,175 +313,229 @@ class

[GitHub] cjolivier01 commented on a change in pull request #9400: Fixed memory leak

2018-04-03 Thread GitBox
cjolivier01 commented on a change in pull request #9400: Fixed memory leak URL: https://github.com/apache/incubator-mxnet/pull/9400#discussion_r178947357 ## File path: src/operator/operator_tune-inl.h ## @@ -616,7 +616,7 @@ class UnaryOpTune : public OperatorTune { */

[GitHub] lanking520 commented on issue #10382: Does memonger work for gluon to save memory?

2018-04-03 Thread GitBox
lanking520 commented on issue #10382: Does memonger work for gluon to save memory? URL: https://github.com/apache/incubator-mxnet/issues/10382#issuecomment-378383312 @nswamy please add 'Python', 'performance' on this topic

[GitHub] lanking520 commented on issue #10378: Inconsistent output from mxnet-python and mxnet-scala

2018-04-03 Thread GitBox
lanking520 commented on issue #10378: Inconsistent output from mxnet-python and mxnet-scala URL: https://github.com/apache/incubator-mxnet/issues/10378#issuecomment-378383510 @nswamy please add 'Python', 'Scala' on this topic

[GitHub] lanking520 commented on issue #10369: Proper seeding of the random number generators for multiple CPU threads and multiple GPU devices

2018-04-03 Thread GitBox
lanking520 commented on issue #10369: Proper seeding of the random number generators for multiple CPU threads and multiple GPU devices URL: https://github.com/apache/incubator-mxnet/issues/10369#issuecomment-378383044 @nswamy please add 'C++' tag on this topic

[GitHub] dabraude commented on issue #10261: [MXNET-128] added load from buffer functions

2018-04-03 Thread GitBox
dabraude commented on issue #10261: [MXNET-128] added load from buffer functions URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378402125 Awesome, thanks!

[GitHub] lanking520 commented on issue #10377: Inconsistency between ndarray and symbol when performing division

2018-04-03 Thread GitBox
lanking520 commented on issue #10377: Inconsistency between ndarray and symbol when performing division URL: https://github.com/apache/incubator-mxnet/issues/10377#issuecomment-378383684 @nswamy please add 'Python' to this topic

[GitHub] anirudhacharya commented on issue #8575: mxnet multicore on LInux in R

2018-04-03 Thread GitBox
anirudhacharya commented on issue #8575: mxnet multicore on LInux in R URL: https://github.com/apache/incubator-mxnet/issues/8575#issuecomment-378402307 @shivonkar close this issue if it is resolved. @cjolivier01

[GitHub] snflake commented on a change in pull request #7393: add depthwise convolution's gpu version optimization

2018-04-03 Thread GitBox
snflake commented on a change in pull request #7393: add depthwise convolution's gpu version optimization URL: https://github.com/apache/incubator-mxnet/pull/7393#discussion_r178944575 ## File path: src/operator/convolution.cu ## @@ -45,6 +47,18 @@ Operator*

[GitHub] sxjscience commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-03 Thread GitBox
sxjscience commented on issue #10363: Fix windows setup doc using VS 2017 URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-378401468 It seems the CI was not triggered. You may need to rebase and `push --force`.

[GitHub] eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178965893 ## File path: src/operator/tensor/dot.cu ## @@

[GitHub] eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178967689 ## File path: src/operator/tensor/dot-inl.cuh ##

[GitHub] cjolivier01 commented on issue #10375: [MXNET-187] GPU fake shuffle functions for Fermi architecture

2018-04-03 Thread GitBox
cjolivier01 commented on issue #10375: [MXNET-187] GPU fake shuffle functions for Fermi architecture URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378405539 I think I'll change this to just give an error for anything below Kepler architecture

[GitHub] eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178964889 ## File path: src/operator/tensor/dot.cu ## @@

[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-03 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946210 ## File path: python/mxnet/kvstore.py ## @@ -474,6 +474,8 @@ def set_optimizer(self,

[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-03 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946282 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -170,43 +216,90 @@ class

[GitHub] anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op

2018-04-03 Thread GitBox
anirudh2290 commented on a change in pull request #10374: Sparse support for Custom Op URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178966730 ## File path: src/operator/custom/custom.cc ## @@ -266,97 +267,237 @@ OpStatePtr CreateState(const

[GitHub] Roshrini commented on issue #8862: loading resnext-101-64x4d models failed!

2018-04-03 Thread GitBox
Roshrini commented on issue #8862: loading resnext-101-64x4d models failed! URL: https://github.com/apache/incubator-mxnet/issues/8862#issuecomment-378404049 @shipeng-uestc I tried a source build from master today and the model seems to be working fine. Couldn't reproduce the issue

[GitHub] anirudh2290 commented on issue #10389: Report clear errors when opencv::imdecode fails.

2018-04-03 Thread GitBox
anirudh2290 commented on issue #10389: Report clear errors when opencv::imdecode fails. URL: https://github.com/apache/incubator-mxnet/issues/10389#issuecomment-378437945 Currently we don't catch exceptions unless they are dmlc::Error. This would need to change to catch other library

[GitHub] zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-03 Thread GitBox
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes. URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378439529

| model | batch size | before | after |
| -- | -- | -- | -- |
| AlexNet | 1 | 268.63 | 378.96 |
| | 2 | 431.88 | 585.72 |

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997359 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178996192 ## File path: tests/python/unittest/test_sparse_operator.py ## @@

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997328 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997144 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997288 ## File path: python/mxnet/ndarray/sparse.py ## @@ -1159,6

[GitHub] eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-03 Thread GitBox
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178998012 ## File path: src/operator/tensor/elemwise_binary_broadcast_op.h ##
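For context on what the PR under review computes: `broadcast_mul(csr, dense)` with a CSR output scales each stored element by the broadcast dense operand, so only the values change while the sparsity structure (`indptr`/`indices`) is preserved. A minimal pure-Python sketch of the row-wise case, assuming the dense operand is a column vector of length `num_rows` (the shape the PR's title suggests; not MXNet's actual kernel):

```python
def csr_broadcast_mul(indptr, indices, data, col_vec):
    """Multiply row i of a CSR matrix by col_vec[i].

    Sparsity is preserved: indptr and indices pass through unchanged,
    only the stored values are scaled.
    """
    out = list(data)
    for row in range(len(indptr) - 1):
        # Entries of row `row` occupy data[indptr[row]:indptr[row + 1]].
        for k in range(indptr[row], indptr[row + 1]):
            out[k] = data[k] * col_vec[row]
    return indptr, indices, out

# 2x3 CSR matrix [[1, 0, 2], [0, 3, 0]] scaled row-wise by [10, 100]
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1, 2, 3]
```

Division would follow the same structure with `/` in place of `*`; the key design point is that the output can reuse the input's index arrays.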

[GitHub] zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-03 Thread GitBox
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes. URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378440323

model | batch size | before | after
-- | -- | -- | --
AlexNet | 1 | 268.63 | 378.96
  | 2 |

[GitHub] aaronmarkham commented on issue #10307: [MXNET-248] Scala Infer API docs editorial pass

2018-04-03 Thread GitBox
aaronmarkham commented on issue #10307: [MXNET-248] Scala Infer API docs editorial pass URL: https://github.com/apache/incubator-mxnet/pull/10307#issuecomment-378441876 I think it expanded due to my rebase yesterday. The CI failure is scalastyle it seems... I can fix the style by

[GitHub] haojin2 commented on issue #7426: mx random seed doesn't work for random_uniform/random_normal on gpu

2018-04-03 Thread GitBox
haojin2 commented on issue #7426: mx random seed doesn't work for random_uniform/random_normal on gpu URL: https://github.com/apache/incubator-mxnet/issues/7426#issuecomment-378443952 Seems like this issue has already been solved; I cannot reproduce it on the latest

[GitHub] pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-03 Thread GitBox
pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378445769 @marcoabreu @cjolivier01 @zheng-da I think the conclusion (based on @xinyu-intel 's analysis) is that the latest MKL-DNN fixed the

[GitHub] eric-haibin-lin opened a new pull request #10388: [MXNET-265] [WIP] Update optimizer doc to clarify wd behaviors

2018-04-03 Thread GitBox
eric-haibin-lin opened a new pull request #10388: [MXNET-265] [WIP] Update optimizer doc to clarify wd behaviors URL: https://github.com/apache/incubator-mxnet/pull/10388 ## Description ## (Brief description on what this PR is about) ## Checklist ## ### Essentials ###
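For context on what the doc update clarifies: in MXNet's optimizers, `wd` (weight decay) is typically folded into the gradient as an L2 term before the learning-rate step, i.e. `w -= lr * (grad + wd * w)`. A minimal sketch of that update rule (momentum and gradient rescaling omitted for brevity):

```python
def sgd_update(weight, grad, lr=0.1, wd=0.01):
    # wd acts as L2 regularization: wd * w is added to the gradient,
    # so the effective update is w -= lr * (grad + wd * w).
    return [w - lr * (g + wd * w) for w, g in zip(weight, grad)]
```

The subtlety such a doc change usually needs to spell out is that `wd` is applied to the weight, not the gradient, so its effect scales with the magnitude of the current weights.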

[GitHub] anirudhacharya commented on issue #9804: [R] mx.io.arrayiter shuffing is disabled

2018-04-03 Thread GitBox
anirudhacharya commented on issue #9804: [R] mx.io.arrayiter shuffing is disabled URL: https://github.com/apache/incubator-mxnet/issues/9804#issuecomment-378421390 @hetong007 any update on this? Also, there does not seem to be a test for the "shuffle=TRUE" case -

[GitHub] marcoabreu commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI

2018-04-03 Thread GitBox
marcoabreu commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI URL: https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378421530 @indhub there you go:

[GitHub] nswamy commented on issue #10307: [MXNET-248] Scala Infer API docs editorial pass

2018-04-03 Thread GitBox
nswamy commented on issue #10307: [MXNET-248] Scala Infer API docs editorial pass URL: https://github.com/apache/incubator-mxnet/pull/10307#issuecomment-378428264 @aaronmarkham please rebase your branch from the master, this PR has a lot of edits to files that you probably didn't touch

[GitHub] indhub commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI

2018-04-03 Thread GitBox
indhub commented on issue #10012: [Bug] There are two broken links on the top level README.md file which point to the old CI URL: https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378433366 Thanks, I'll create the PR.

[GitHub] rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training

2018-04-03 Thread GitBox
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16 support for distributed training URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946606 ## File path: src/kvstore/kvstore_dist_server.h ## @@ -220,175 +313,229 @@ class
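The theme of the PR — float16 gradients on the wire with accumulation in a wider type on the server — can be sketched with the standard library's IEEE 754 half-precision pack format. This illustrates the general pattern only, not the actual `kvstore_dist_server.h` code:

```python
import struct

def to_fp16_bytes(vals):
    # Workers serialize gradients as half precision ('e' struct format,
    # 2 bytes per value) to halve network traffic.
    return struct.pack('<%de' % len(vals), *vals)

def server_accumulate(acc_fp32, fp16_payload):
    # The server decodes fp16 and accumulates in float (double here) so
    # rounding error does not compound across many workers.
    n = len(fp16_payload) // 2
    vals = struct.unpack('<%de' % n, fp16_payload)
    return [a + v for a, v in zip(acc_fp32, vals)]
```

Values like 1.0 and 0.5 round-trip exactly through fp16; arbitrary gradients lose precision on the wire, which is why accumulating in a wider type on the server side matters.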
