[GitHub] [incubator-mxnet] ZhennanQin closed issue #12681: Batch_norm parameter names mismatch on gluon

2019-11-06 Thread GitBox
ZhennanQin closed issue #12681: Batch_norm parameter names mismatch on gluon URL: https://github.com/apache/incubator-mxnet/issues/12681

[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type inference

2019-11-06 Thread GitBox
ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343501965 ## File path: tests/python/gpu/test_contrib_amp.py ## @@ -475,6 +475,15 @@ def test_fp16_casting():

[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type inference

2019-11-06 Thread GitBox
ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343501689 ## File path: src/operator/slice_channel-inl.h ## @@ -176,16 +177,22 @@ class SliceChannelProp :

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 7a288d4 Bump the

[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience edited a comment on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550843566 @szha If these operators are executed in the bulk mode there will be no StreamSynchronize

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550843566 @szha If these operators are executed in the bulk mode there will be StreamSynchronize in-between. I'm

[GitHub] [incubator-mxnet] szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550801740 > I've checked the source code. The new approach should be fine as long as we use `cudaMemsetAsync` for

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550788498 I guess the main purpose is to speed up the initialization of a huge number of NDArrays. Adding a

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550755369 The reset_array was introduced in https://github.com/apache/incubator-mxnet/pull/16446. Maybe we should

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550740119 Because we need to use `a[:] = 0` for the original ndarray and use `a[()] = 0` for the new numpy ndarray, we
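A minimal sketch of the indexing difference referenced above, not the actual `zero_grad` implementation; it only assumes an MXNet build where the numpy interface (`mx.npx.set_np()`) is available:

```python
import mxnet as mx

# Legacy ndarray: zero in place with a full slice.
a = mx.nd.array([1.0, 2.0, 3.0])
a[:] = 0

# numpy-compatible ndarray: a[()] = 0 also covers 0-d (scalar) arrays,
# which generally do not accept a[:] = 0.
mx.npx.set_np()
b = mx.np.array(3.0)
b[()] = 0
```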

[GitHub] [incubator-mxnet] wuxun-zhang commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators

2019-11-06 Thread GitBox
wuxun-zhang commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343474011 ## File path: tests/nightly/test_large_array.py ## @@ -944,11 +997,14 @@ def

[GitHub] [incubator-mxnet] wuxun-zhang commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators

2019-11-06 Thread GitBox
wuxun-zhang commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343473989 ## File path: tests/nightly/test_large_array.py ## @@ -782,8 +801,30 @@ def

[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed

2019-11-06 Thread GitBox
ZhennanQin commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-550728914 With https://github.com/apache/incubator-mxnet/pull/16734 merged, the computation error

[incubator-mxnet] branch master updated (d967be9 -> c38b527)

2019-11-06 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from d967be9 [BUG FIX] Always preserve batch dimension in batches returned from dataloader (#16233)

[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow

2019-11-06 Thread GitBox
pengzhao-intel merged pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow URL: https://github.com/apache/incubator-mxnet/pull/16734

[GitHub] [incubator-mxnet] suzhengpeng commented on issue #1161: ImportError: No module named skimage when running Neural-style example

2019-11-06 Thread GitBox
suzhengpeng commented on issue #1161: ImportError: No module named skimage when running Neural-style example URL: https://github.com/apache/incubator-mxnet/issues/1161#issuecomment-550679270 I got the same problem. How do I deal with it?

[incubator-mxnet] branch master updated (ecb7a3a -> d967be9)

2019-11-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from ecb7a3a Update submodule dmlc-core (#16742) add d967be9 [BUG FIX] Always preserve batch dimension

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on a change in pull request #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r343460128 ## File path: python/mxnet/gluon/parameter.py ## @@ -904,7 +904,11 @@

[GitHub] [incubator-mxnet] szha merged pull request #16233: [BUG FIX] Always preserve batch dimension in batches returned from dataloader

2019-11-06 Thread GitBox
szha merged pull request #16233: [BUG FIX] Always preserve batch dimension in batches returned from dataloader URL: https://github.com/apache/incubator-mxnet/pull/16233

[GitHub] [incubator-mxnet] szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550605652 > Shall we move away from reset_array in the old ndarray too? @sxjscience This concern is not addressed

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-06 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550603528 @reminisce @szha I've added the test. Should be ready for review

[GitHub] [incubator-mxnet] iris0329 commented on issue #15492: No CMAKE_CUDA_COMPILER could be found

2019-11-06 Thread GitBox
iris0329 commented on issue #15492: No CMAKE_CUDA_COMPILER could be found URL: https://github.com/apache/incubator-mxnet/issues/15492#issuecomment-550601840 @SpaceView On Ubuntu, adding `set(CMAKE_CUDA_COMPILER "/usr/local/cuda-9.0/bin/nvcc")` solved this problem!

[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-06 Thread GitBox
wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550591145 @pengzhao-intel @ZhennanQin Now, except for these three ops, no similar issue is found in other mkldnn

[incubator-mxnet] branch master updated (da33da3 -> ecb7a3a)

2019-11-06 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository. anirudh2290 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from da33da3 Add MXNet Ops for fast multihead attention (#16408) add ecb7a3a Update submodule

[GitHub] [incubator-mxnet] anirudh2290 merged pull request #16742: Update submodule dmlc-core

2019-11-06 Thread GitBox
anirudh2290 merged pull request #16742: Update submodule dmlc-core URL: https://github.com/apache/incubator-mxnet/pull/16742

[GitHub] [incubator-mxnet] anirudh2290 opened a new pull request #16748: Fix SliceChannel Type inference

2019-11-06 Thread GitBox
anirudh2290 opened a new pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748 ## Description ## Fix SliceChannel Type Inference. Do forward and backward inference for slice channel with ElemwiseAttr logic. Remove exception thrown
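A small illustration of where this inference matters (a sketch, not the PR's test case): forward type inference should propagate the input dtype to every split output, and backward inference should recover the input dtype from the outputs. `SliceChannel` and `infer_type` are existing symbol APIs; the dtype chosen here is arbitrary.

```python
import numpy as np
import mxnet as mx

# Split one input into two outputs along axis 1, then ask MXNet to infer dtypes.
data = mx.sym.Variable('data')
out = mx.sym.SliceChannel(data, num_outputs=2, axis=1)

arg_types, out_types, aux_types = out.infer_type(data=np.float16)
print(arg_types)  # dtype of the 'data' argument
print(out_types)  # dtypes of both split outputs, propagated from the input
```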

[incubator-mxnet] branch master updated (58b824f -> da33da3)

2019-11-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository. haibin pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 58b824f fix R docs (#16733) add da33da3 Add MXNet Ops for fast multihead attention (#16408) No new

[GitHub] [incubator-mxnet] DickJC123 commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-11-06 Thread GitBox
DickJC123 commented on issue #16408: Add MXNet Ops for fast multihead attention URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550585284 Thanks Tao. Looking forward to working with you and others on MXNet's MHA definition.

[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #16408: Add MXNet Ops for fast multihead attention

2019-11-06 Thread GitBox
eric-haibin-lin merged pull request #16408: Add MXNet Ops for fast multihead attention URL: https://github.com/apache/incubator-mxnet/pull/16408

[GitHub] [incubator-mxnet] sxjscience commented on issue #16747: Fused Op causes MXNetError

2019-11-06 Thread GitBox
sxjscience commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550584121 @zhreshold

[GitHub] [incubator-mxnet] sxjscience commented on issue #16747: Fused Op causes MXNetError

2019-11-06 Thread GitBox
sxjscience commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550583532 I suggest turning the fused_op off by default in the 1.6.0 release and announcing it as an experimental feature, or reverting the PR. @szha
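For anyone who needs to opt out locally in the meantime, a sketch of disabling the feature per process; the assumption here is that the pointwise fusion from #15167 is gated by the `MXNET_USE_FUSION` environment variable:

```python
import os

# Assumption: pointwise fusion is controlled by MXNET_USE_FUSION.
# Set it before importing mxnet so the backend sees the flag.
os.environ['MXNET_USE_FUSION'] = '0'

import mxnet as mx
print(mx.__version__)
```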

[GitHub] [incubator-mxnet] TaoLv commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-11-06 Thread GitBox
TaoLv commented on issue #16408: Add MXNet Ops for fast multihead attention URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550583201 Thanks for your response @Caenorst. Looking forward to your general proposal for cuDNN MHA integration. Now I'm withdrawing the

[GitHub] [incubator-mxnet] leezu commented on issue #16747: Fused Op causes MXNetError

2019-11-06 Thread GitBox
leezu commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550579206 @ptrendx

[GitHub] [incubator-mxnet] leezu opened a new issue #16747: Fused Op causes MXNetError

2019-11-06 Thread GitBox
leezu opened a new issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747 ## Description After https://github.com/apache/incubator-mxnet/pull/15167 was merged, GluonNLP CI broke. ### Error Message ``` [2019-11-06T06:44:48.223Z]

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-06 Thread GitBox
pengzhao-intel commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550577956 @access2rohit did you have a chance to try this patch?

[GitHub] [incubator-mxnet] sxjscience commented on issue #15167: Pointwise fusion for GPU

2019-11-06 Thread GitBox
sxjscience commented on issue #15167: Pointwise fusion for GPU URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550573105 @ptrendx Okay, I just think that we may need more time to test for the cases in GluonNLP and GluonCV.

[GitHub] [incubator-mxnet] ptrendx commented on issue #15167: Pointwise fusion for GPU

2019-11-06 Thread GitBox
ptrendx commented on issue #15167: Pointwise fusion for GPU URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550568022 I would say it is expected - the whole point of the feature is to compile the portion of the model (which is much more expensive than just running that

[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow

2019-11-06 Thread GitBox
ZhennanQin commented on a change in pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow URL: https://github.com/apache/incubator-mxnet/pull/16734#discussion_r343402064 ## File path: src/operator/subgraph/mkldnn/mkldnn_fc.cc ## @@ -143,18 +144,34 @@ void
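The arithmetic behind the overflow, as a generic sketch (the scales and the rescaling strategy in the PR itself are not reproduced here): in int8 convolution/FC the bias is requantized with the product of the data and weight scales, so a large combined scale can push the integer bias outside the range of a narrow integer type.

```python
import numpy as np

# Illustrative numbers only, not values from the PR.
bias_fp32 = np.float32(20.0)
data_scale = np.float32(200.0)    # int8 data quantization scale
weight_scale = np.float32(150.0)  # int8 weight quantization scale

# Standard quantization: bias scale = data_scale * weight_scale.
bias_q = np.round(bias_fp32 * data_scale * weight_scale)  # 600000.0

print(bias_q > np.iinfo(np.int8).max)    # True: far outside int8 range
print(bias_q <= np.iinfo(np.int32).max)  # True: still representable if widened or rescaled
```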

[GitHub] [incubator-mxnet] larroy opened a new pull request #16746: [docs] Fix runtime feature detection documentation

2019-11-06 Thread GitBox
larroy opened a new pull request #16746: [docs] Fix runtime feature detection documentation URL: https://github.com/apache/incubator-mxnet/pull/16746 ## Description ## As the title says. The documentation was not appearing in the index or the sidebar. Adds a usage example to mxnet.runtime

[GitHub] [incubator-mxnet] sxjscience opened a new issue #16745: [Numpy] Cannot print the numpy scalar with format string

2019-11-06 Thread GitBox
sxjscience opened a new issue #16745: [Numpy] Cannot print the numpy scalar with format string URL: https://github.com/apache/incubator-mxnet/issues/16745
```python
import mxnet as mx
mx.npx.set_np()
a = mx.np.array(1.0)
print('{:2f}'.format(a))
```
Error message:
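A possible workaround while the issue stands (a sketch, not the eventual fix): convert the 0-d array to a host-side Python scalar before formatting. `asnumpy()` copies the value into a NumPy array and `item()` extracts the Python float.

```python
import mxnet as mx
mx.npx.set_np()

a = mx.np.array(1.0)
# Format the extracted scalar instead of the ndarray itself.
print('{:2f}'.format(a.asnumpy().item()))
```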

[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators

2019-11-06 Thread GitBox
ChaiBapchya commented on a change in pull request #16184: Add large tensor nightly tests for MKL-DNN operators URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343352488 ## File path: tests/nightly/test_large_array.py ## @@ -782,8 +801,30 @@ def

[GitHub] [incubator-mxnet] larroy commented on issue #16412: Cleanup output of docker cache generation

2019-11-06 Thread GitBox
larroy commented on issue #16412: Cleanup output of docker cache generation URL: https://github.com/apache/incubator-mxnet/pull/16412#issuecomment-550507141 @marcoabreu Could you have a look again at this PR? The changes were separated as you requested. Thanks.

[GitHub] [incubator-mxnet] sxjscience closed issue #16650: [Bug][Numpy] Cannot expand_dims of bool array

2019-11-06 Thread GitBox
sxjscience closed issue #16650: [Bug][Numpy] Cannot expand_dims of bool array URL: https://github.com/apache/incubator-mxnet/issues/16650

[GitHub] [incubator-mxnet] DickJC123 commented on issue #15167: Pointwise fusion for GPU

2019-11-06 Thread GitBox
DickJC123 commented on issue #15167: Pointwise fusion for GPU URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550494104 Thanks for pointing out this large perf change. I will investigate.

[GitHub] [incubator-mxnet] haojin2 opened a new pull request #16744: Numpy-compatible gcd operator

2019-11-06 Thread GitBox
haojin2 opened a new pull request #16744: Numpy-compatible gcd operator URL: https://github.com/apache/incubator-mxnet/pull/16744 ## Description ## https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.gcd.html?highlight=gcd#numpy.gcd ## Checklist ## ### Essentials
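Assuming the operator mirrors the referenced `numpy.gcd` (element-wise greatest common divisor with broadcasting) and is exposed as `mx.np.gcd` — the name is an assumption based on the PR title — usage might look like the sketch below.

```python
import mxnet as mx
mx.npx.set_np()

# Hypothetical usage sketch; gcd is defined element-wise on integer arrays.
a = mx.np.array([12, 18, 28], dtype='int32')
b = mx.np.array([8, 27, 21], dtype='int32')
print(mx.np.gcd(a, b))  # expected: [4 9 7]
```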

[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16742: Update submodule dmlc-core

2019-11-06 Thread GitBox
anirudh2290 commented on issue #16742: Update submodule dmlc-core URL: https://github.com/apache/incubator-mxnet/pull/16742#issuecomment-550483060 Preferably this should also go in 1.6 if we are still taking bug fixes.

[GitHub] [incubator-mxnet] sxjscience commented on issue #16743: [Numpy] Cannot mix numpy ndarray and MXNet numpy ndarray

2019-11-06 Thread GitBox
sxjscience commented on issue #16743: [Numpy] Cannot mix numpy ndarray and MXNet numpy ndarray URL: https://github.com/apache/incubator-mxnet/issues/16743#issuecomment-550478097 This will be fine: ```python import mxnet as mx import numpy as np mx.npx.set_np() a = 1 a +=
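The snippet above is truncated, so here is a sketch of the distinction being discussed: Python scalars can be mixed with MXNet numpy arrays, while plain NumPy arrays are best converted explicitly before being combined with them. The conversion helpers shown (`mx.np.array(...)`, `.asnumpy()`) are existing APIs; whether a particular mixed expression fails depends on the operator.

```python
import mxnet as mx
import numpy as np
mx.npx.set_np()

x_np = np.array([1.0, 2.0, 3.0])     # plain NumPy array
x_mx = mx.np.array([1.0, 2.0, 3.0])  # MXNet numpy-compatible array

# Convert explicitly instead of mixing the two array types in one expression.
y = x_mx + mx.np.array(x_np)  # both operands are MXNet arrays
z = x_mx.asnumpy() + x_np     # both operands are NumPy arrays

# Python scalars are fine to combine with either type.
a = 1
a += x_mx.sum()
```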

[GitHub] [incubator-mxnet] sxjscience opened a new issue #16743: [Numpy]

2019-11-06 Thread GitBox
sxjscience opened a new issue #16743: [Numpy] URL: https://github.com/apache/incubator-mxnet/issues/16743 Mixing the original numpy array and the mxnet numpy array will trigger some errors: Minimal reproducible example: ```python import mxnet as mx import numpy as np

[incubator-mxnet] branch benchmark updated (579b9dd -> c4580ae)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 579b9dd fix another typo add c4580ae Fix for wrong reqs set after switching from training to

[incubator-mxnet] branch master updated (3c404a5 -> 58b824f)

2019-11-06 Thread thomasdelteil
This is an automated email from the ASF dual-hosted git repository. thomasdelteil pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 3c404a5 Mixed data type binary ops (#16699) add 58b824f fix R docs (#16733) No new revisions

[GitHub] [incubator-mxnet] DickJC123 opened a new pull request #16742: Update submodule dmlc-core

2019-11-06 Thread GitBox
DickJC123 opened a new pull request #16742: Update submodule dmlc-core URL: https://github.com/apache/incubator-mxnet/pull/16742 ## Description ## This PR advances the 3rdparty dmlc-core submodule by the following 2 commits: ``` ca9f932 2019-11-05 Dick Carter Fix

[GitHub] [incubator-mxnet] ThomasDelteil merged pull request #16733: fix R docs

2019-11-06 Thread GitBox
ThomasDelteil merged pull request #16733: fix R docs URL: https://github.com/apache/incubator-mxnet/pull/16733

[GitHub] [incubator-mxnet] Caenorst commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-11-06 Thread GitBox
Caenorst commented on issue #16408: Add MXNet Ops for fast multihead attention URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550456725 @TaoLv: 1) The current design is two separate ops, each representing one matrix multiplication part of the multihead attention
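For readers unfamiliar with the structure being described, the sketch below shows the two batched matrix multiplications in plain NumPy; it is only an illustration of the stages the new ops cover, not the fused operators from the PR.

```python
import numpy as np

batch, heads, seq, head_dim = 2, 4, 8, 16
q = np.random.randn(batch, heads, seq, head_dim)
k = np.random.randn(batch, heads, seq, head_dim)
v = np.random.randn(batch, heads, seq, head_dim)

# Stage 1: query x key^T -> attention scores (first batched matmul).
scores = np.matmul(q, np.swapaxes(k, -1, -2)) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis

# Stage 2: attention weights x value -> per-head outputs (second batched matmul).
out = np.matmul(weights, v)
print(out.shape)  # (2, 4, 8, 16)
```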

[incubator-mxnet] branch benchmark updated (c0560fc -> 579b9dd)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from c0560fc add log message and TODO add 77beeb6 add cutlass as 3rdparty dependency add

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 718f37e Bump the

[GitHub] [incubator-mxnet] sshearn commented on issue #16620: Incompatible input shape

2019-11-06 Thread GitBox
sshearn commented on issue #16620: Incompatible input shape URL: https://github.com/apache/incubator-mxnet/issues/16620#issuecomment-550442553 @lanking520 Any updates here? It's a huge blocker for me. Thanks.

[GitHub] [incubator-mxnet] nikudyshko opened a new issue #16741: Error detecting C++11 and C++14

2019-11-06 Thread GitBox
nikudyshko opened a new issue #16741: Error detecting C++11 and C++14 URL: https://github.com/apache/incubator-mxnet/issues/16741 ## Description During the configuration process, I've noticed CMake reporting failures when trying to detect C++11 support. If `USE_CXX14_IF_AVAILABLE` is enabled

[GitHub] [incubator-mxnet] slavah commented on issue #11458: Multithreading error.

2019-11-06 Thread GitBox
slavah commented on issue #11458: Multithreading error. URL: https://github.com/apache/incubator-mxnet/issues/11458#issuecomment-550377076 Has anyone found a solution for this issue?

[GitHub] [incubator-mxnet] MicKot commented on issue #16591: Module.predict() produces only one output meanwhile Module.forward() and then Module.get_outputs() creates multiple (as it should)

2019-11-06 Thread GitBox
MicKot commented on issue #16591: Module.predict() produces only one output meanwhile Module.forward() and then Module.get_outputs() creates multiple (as it should) URL: https://github.com/apache/incubator-mxnet/issues/16591#issuecomment-550334560 I guess it's my only option :)

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 2ba04c9 Bump the

[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow

2019-11-06 Thread GitBox
ciyongch commented on a change in pull request #16734: [MKLDNN] Fix int8 convolution/fc bias overflow URL: https://github.com/apache/incubator-mxnet/pull/16734#discussion_r343066541 ## File path: src/operator/subgraph/mkldnn/mkldnn_fc.cc ## @@ -143,18 +144,34 @@ void

[GitHub] [incubator-mxnet] canteen-man opened a new issue #16740: Whether the assert about the image is not continuous can be added in the iter_image_recordio_2.cc

2019-11-06 Thread GitBox
canteen-man opened a new issue #16740: Whether the assert about the image is not continuous can be added in the iter_image_recordio_2.cc URL: https://github.com/apache/incubator-mxnet/issues/16740 ## Description I met an error because an image in my test dataset is damaged. And when I

[GitHub] [incubator-mxnet] haojin2 opened a new issue #16739: Flaky test: test_higher_order_grad.test_arctan

2019-11-06 Thread GitBox
haojin2 opened a new issue #16739: Flaky test: test_higher_order_grad.test_arctan URL: https://github.com/apache/incubator-mxnet/issues/16739 http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-cpu/detail/PR-16728/2/pipeline ```

[GitHub] [incubator-mxnet] BogdanovKirill commented on issue #11458: Multithreading error.

2019-11-06 Thread GitBox
BogdanovKirill commented on issue #11458: Multithreading error. URL: https://github.com/apache/incubator-mxnet/issues/11458#issuecomment-550233819 Same issue on v1.5. Has anyone found a solution to the problem?

[GitHub] [incubator-mxnet] igor-byel edited a comment on issue #15275: How to run mxnet(C++) in single-thread mode?

2019-11-06 Thread GitBox
igor-byel edited a comment on issue #15275: How to run mxnet(C++) in single-thread mode? URL: https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-550218715 > ``` > compile mxnet with OPENMP=0 > export OMP_NUM_THREADS=1 > export MXNET_ENGINE_TYPE=NaiveEngine >

[GitHub] [incubator-mxnet] igor-byel commented on issue #15275: How to run mxnet(C++) in single-thread mode?

2019-11-06 Thread GitBox
igor-byel commented on issue #15275: How to run mxnet(C++) in single-thread mode? URL: https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-550218715
> ```
> compile mxnet with OPENMP=0
> export OMP_NUM_THREADS=1
> export MXNET_ENGINE_TYPE=NaiveEngine
> ```

[GitHub] [incubator-mxnet] yajiedesign opened a new pull request #16738: Fix encode write error with windows.

2019-11-06 Thread GitBox
yajiedesign opened a new pull request #16738: Fix encode write error with windows. URL: https://github.com/apache/incubator-mxnet/pull/16738 Change `_generate_op_module_signature`/`get_module_file` to open files with `encoding="utf-8"`; this fixes some encoding errors on Chinese Windows systems.
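The gist of the change as a generic sketch (the real helpers live in MXNet's operator-signature generation; the file name below is illustrative only): open generated files with an explicit encoding so writes do not depend on the system locale codepage.

```python
# Explicit encoding keeps the write independent of the platform default
# (e.g. a GBK codepage on a Chinese Windows system). Hypothetical file name.
with open('generated_op_module.py', 'w', encoding='utf-8') as f:
    f.write('# -*- coding: utf-8 -*-\n')
    f.write('"""Auto-generated operator signatures."""\n')
```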

[GitHub] [incubator-mxnet] wuxun-zhang edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor

2019-11-06 Thread GitBox
wuxun-zhang edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor URL: https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550191232 You can try to use `export MKLDNN_VERBOSE=1` to get these logs. Also I just filed a [PR

[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-06 Thread GitBox
wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550203076 @ZhennanQin Sure. I will do that.

[incubator-mxnet] branch benchmark updated (bfea509 -> c0560fc)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from bfea509 Embedding gradient performance optimization on GPU (#16355) add c0560fc add log

[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-06 Thread GitBox
ZhennanQin commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550195380 Nice catch! Could you please review all MKLDNN-supported ops to see if any other implementations have the same

[incubator-mxnet] branch benchmark updated (82ed82f -> bfea509)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 82ed82f Aggregated zero grad (#16446) add bfea509 Embedding gradient performance optimization

[incubator-mxnet] branch benchmark updated (8c22fac -> 82ed82f)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 8c22fac Aggregated adamw update (#16398) add 82ed82f Aggregated zero grad (#16446) No new

[incubator-mxnet] branch benchmark updated (0415a2f -> 8c22fac)

2019-11-06 Thread apeforest
This is an automated email from the ASF dual-hosted git repository. apeforest pushed a change to branch benchmark in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 0415a2f Eliminate common expressions (#15657) add 8c22fac Aggregated adamw update (#16398) No