[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-12 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new b492411 Bump the publish timestamp.

[GitHub] [incubator-mxnet] anirudhacharya commented on issue #17734: [MXNET-889] Implement ONNX export for gluon LSTM.

2020-03-12 Thread GitBox
anirudhacharya commented on issue #17734: [MXNET-889] Implement ONNX export for gluon LSTM. URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-598573803 > I could add a test case to [test_node.py export_test_cases](https://github.com/apache/incubator-mxnet/blob/713d9623

[incubator-mxnet] branch master updated (18c2a26 -> 34010ea)

2020-03-12 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 18c2a26 [numpy] add magic methods for symbol bitwise ops (#17807) add 34010ea [CD] switch CD_RELEASE_JOB_NAME from global env var to job argument

[GitHub] [incubator-mxnet] szha merged pull request #17775: [CD] switch CD_RELEASE_JOB_NAME from global env var to job argument

2020-03-12 Thread GitBox
szha merged pull request #17775: [CD] switch CD_RELEASE_JOB_NAME from global env var to job argument URL: https://github.com/apache/incubator-mxnet/pull/17775 This is an automated message from the Apache Git Service. To resp

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392038428 ## File path: src/api/operator/numpy/linalg/np_pinv.cc ## @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17811: add ffi full_like, binary ops, benchmark test

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17811: add ffi full_like, binary ops, benchmark test URL: https://github.com/apache/incubator-mxnet/pull/17811#discussion_r392036401 ## File path: src/operator/tensor/init_op.h ## @@ -105,6 +105,13 @@ struct FullLikeOpParam : pu

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17811: add ffi full_like, binary ops, benchmark test

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17811: add ffi full_like, binary ops, benchmark test URL: https://github.com/apache/incubator-mxnet/pull/17811#discussion_r392034583 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -1025,8 +1023,9 @@ def subtract(x1, x2, o

[GitHub] [incubator-mxnet] samskalicky commented on issue #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on issue #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#issuecomment-598550796 Good stuff @guanxinq ! I think we're really close to the finish line. I only had small suggestions. Overall strategy l

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392031236 ## File path: example/extensions/lib_custom_op/transcsr_lib.cc ## @@ -0,0 +1,195

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392031386 ## File path: example/extensions/lib_custom_op/transcsr_lib.cc ## @@ -0,0 +1,195

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392030868 ## File path: example/extensions/lib_custom_op/transcsr_lib.cc ## @@ -0,0 +1,195

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392030429 ## File path: example/extensions/lib_custom_op/transcsr_lib.cc ## @@ -0,0 +1,195

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392026937 ## File path: include/mxnet/lib_api.h ## @@ -214,6 +214,18 @@ enum MXDType {

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392023970 ## File path: src/c_api/c_api.cc ## @@ -572,12 +645,30 @@ int MXLoadLib(const ch

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392022581 ## File path: src/c_api/c_api.cc ## @@ -162,6 +195,24 @@ void CustomFComputeDisp

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392022241 ## File path: src/c_api/c_api.cc ## @@ -341,6 +412,8 @@ int MXLoadLib(const char

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392021465 ## File path: src/c_api/c_api.cc ## @@ -178,6 +229,13 @@ void CustomFComputeDisp

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392019600 ## File path: python/mxnet/numpy/linalg.py ## @@ -232,7 +232,7 @@ def svd(a): return _mx_nd_np.linalg.svd(a)

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016315 ## File path: src/api/operator/numpy/linalg/np_eigvals.cc ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Softwa

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392018516 ## File path: src/api/operator/numpy/linalg/np_tensorinv.cc ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Soft

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016637 ## File path: src/api/operator/numpy/linalg/np_pinv.cc ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016699 ## File path: src/api/operator/numpy/linalg/np_pinv.cc ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016917 ## File path: src/api/operator/numpy/linalg/np_potrf.cc ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392019068 ## File path: src/api/operator/numpy/linalg/np_tensorsolve.cc ## @@ -0,0 +1,56 @@ +/* + * Licensed to the Apache So

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392019355 ## File path: include/mxnet/lib_api.h ## @@ -1091,6 +1187,43 @@ extern "C" {

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392018268 ## File path: include/mxnet/lib_api.h ## @@ -393,13 +458,21 @@ class OpResource

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017949 ## File path: include/mxnet/lib_api.h ## @@ -393,13 +458,21 @@ class OpResource

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017757 ## File path: include/mxnet/lib_api.h ## @@ -393,13 +458,21 @@ class OpResource

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017457 ## File path: include/mxnet/lib_api.h ## @@ -378,6 +436,13 @@ class OpResource {

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392016625 ## File path: include/mxnet/lib_api.h ## @@ -229,20 +241,60 @@ enum MXReturnValu

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators

2020-03-12 Thread GitBox
samskalicky commented on a change in pull request #17569: Adding sparse support to MXTensor for custom operators URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392012930 ## File path: example/extensions/lib_custom_op/gemm_lib.cc ## @@ -140,6 +140,22

[GitHub] [incubator-mxnet] wuxun-zhang edited a comment on issue #17231: cannot quantization example

2020-03-12 Thread GitBox
wuxun-zhang edited a comment on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-598524032 @venkat-kittu I have just provided a patch here https://github.com/wuxun-zhang/incubator-mxnet/commit/c06a715985dc1db3ae2e65227bf2ef4

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17817: [Numpy] FFI for np_where

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17817: [Numpy] FFI for np_where URL: https://github.com/apache/incubator-mxnet/pull/17817#discussion_r392008301 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -7482,14 +7482,8 @@ def where(condition, x=None, y=None): # py

[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #17231: cannot quantization example

2020-03-12 Thread GitBox
wuxun-zhang commented on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-598524032 @venkat-kittu I have just provided a patch here https://github.com/wuxun-zhang/incubator-mxnet/commit/37931b624dfdb75fc0a0090a813833632e47cc

[GitHub] [incubator-mxnet] hkvision commented on issue #17822: [Question] Distributed training performance for one worker and one server on the same node

2020-03-12 Thread GitBox
hkvision commented on issue #17822: [Question] Distributed training performance for one worker and one server on the same node URL: https://github.com/apache/incubator-mxnet/issues/17822#issuecomment-598521256 > You probably want to check out https://medium.com/apache-mxnet/intel-mlsl-mak

[GitHub] [incubator-mxnet] zixuanweeei commented on issue #17818: RNN operator produces inconsistent gradients for h2h_bias for stacked RNNs

2020-03-12 Thread GitBox
zixuanweeei commented on issue #17818: RNN operator produces inconsistent gradients for h2h_bias for stacked RNNs URL: https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598520346 > @zixuanweeei Thank you for your fix. I verified that it does fix the issue in CPU. We veri

[GitHub] [incubator-mxnet] keerthanvasist commented on issue #17818: RNN operator produces inconsistent gradients for h2h_bias for stacked RNNs

2020-03-12 Thread GitBox
keerthanvasist commented on issue #17818: RNN operator produces inconsistent gradients for h2h_bias for stacked RNNs URL: https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598512194 @zixuanweeei Thank you for your fix. I verified that it does fix the issue in CPU. We ver

[incubator-mxnet] branch master updated: [numpy] add magic methods for symbol bitwise ops (#17807)

2020-03-12 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new 18c2a26 [numpy] add magic methods for symbol bitwise ops (#17807)

[GitHub] [incubator-mxnet] haojin2 merged pull request #17807: [numpy] add magic methods for symbol bitwise ops

2020-03-12 Thread GitBox
haojin2 merged pull request #17807: [numpy] add magic methods for symbol bitwise ops URL: https://github.com/apache/incubator-mxnet/pull/17807 This is an automated message from the Apache Git Service. To respond to the messa

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-12 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new a887966 Bump the publish timestamp.

[GitHub] [incubator-mxnet] haojin2 commented on issue #17385: [NumPy] add random.geometric op

2020-03-12 Thread GitBox
haojin2 commented on issue #17385: [NumPy] add random.geometric op URL: https://github.com/apache/incubator-mxnet/pull/17385#issuecomment-598480020 @hzfan Can you also take a look at the FFI parts? This is an automated message

[GitHub] [incubator-mxnet] haojin2 commented on issue #17385: [NumPy] add random.geometric op

2020-03-12 Thread GitBox
haojin2 commented on issue #17385: [NumPy] add random.geometric op URL: https://github.com/apache/incubator-mxnet/pull/17385#issuecomment-598479958 @AntiZpvoh Please address all comments and also resolve the conflicts. This is

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957825 ## File path: src/operator/numpy/random/np_geometric_op.h ## @@ -0,0 +1,185 @@ +/* + * Licensed to the Apac

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957740 ## File path: src/operator/numpy/random/np_geometric_op.h ## @@ -0,0 +1,177 @@ +/* + * Licensed to the Apac

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17385: [NumPy] add random.geometric op URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957920 ## File path: src/operator/numpy/random/np_geometric_op.h ## @@ -0,0 +1,185 @@ +/* + * Licensed to the Apac

[GitHub] [incubator-mxnet] sxjscience opened a new issue #17823: [Operator] Add `index_add` or `index_update` to numpy extension

2020-03-12 Thread GitBox
sxjscience opened a new issue #17823: [Operator] Add `index_add` or `index_update` to numpy extension URL: https://github.com/apache/incubator-mxnet/issues/17823 We need the functionality to calculate `b = index_add(a, indices, value)`, which mimics the outcome of `a[indices] += value`.
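
A minimal NumPy sketch of the requested semantics (not MXNet's implementation; the helper name, shapes, and duplicate-index behaviour below are illustrative assumptions based on similar ops in other frameworks): an out-of-place `index_add` that returns a new array instead of mutating `a` in place.

```python
import numpy as np

def index_add(a, indices, value):
    """Out-of-place sketch: return a copy of `a` with `value` accumulated
    at `indices`, leaving `a` itself unchanged."""
    b = a.copy()
    # np.add.at does unbuffered accumulation, so repeated indices each
    # contribute; plain `a[indices] += value` in NumPy would keep only
    # the last update for a repeated index.
    np.add.at(b, indices, value)
    return b

a = np.zeros(5)
b = index_add(a, np.array([0, 1, 1, 3]), 1.0)
print(b)  # [1. 2. 0. 1. 0.] -- index 1 received two updates
print(a)  # [0. 0. 0. 0. 0.] -- original array untouched
```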

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17323: [Numpy] Kron operator

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17323: [Numpy] Kron operator URL: https://github.com/apache/incubator-mxnet/pull/17323#discussion_r391946230 ## File path: src/operator/numpy/np_kron-inl.h ## @@ -0,0 +1,261 @@ +/* + * Licensed to the Apache Software Foundation

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943282 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -1164,6 +1165,34 @@ def mod(x1, x2, out=None, **kwarg

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943379 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -4367,6 +4396,26 @@ def maximum(x1, x2, out=None, **k

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax, fmin, fmod URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943076 ## File path: python/mxnet/symbol/numpy/_symbol.py ## @@ -4156,6 +4171,11 @@ def any(a, axis=None, out=None

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \ delete \ matmul etc.

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \ delete \ matmul etc. URL: https://github.com/apache/incubator-mxnet/pull/17759#discussion_r391937750 ## File path: src/operator/tensor/init_op.h ## @@ -215,6 +215,21 @@ struct RangeParam : publi

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \ delete \ matmul etc.

2020-03-12 Thread GitBox
haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \ delete \ matmul etc. URL: https://github.com/apache/incubator-mxnet/pull/17759#discussion_r391936815 ## File path: src/operator/tensor/init_op.h ## @@ -215,6 +215,21 @@ struct RangeParam : publi

[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17822: [Question] Distributed training performance for one worker and one server on the same node

2020-03-12 Thread GitBox
eric-haibin-lin commented on issue #17822: [Question] Distributed training performance for one worker and one server on the same node URL: https://github.com/apache/incubator-mxnet/issues/17822#issuecomment-598399435 You probably want to check out https://medium.com/apache-mxnet/intel-mls

[GitHub] [incubator-mxnet] mk-61 edited a comment on issue #16173: Saving and loading cudNN autotune and graph optimization

2020-03-12 Thread GitBox
mk-61 edited a comment on issue #16173: Saving and loading cudNN autotune and graph optimization URL: https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-598392266 How about adding a method to Symbol, to calculate certain aspects of a model? In C API it would look somethin

[GitHub] [incubator-mxnet] mk-61 commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2020-03-12 Thread GitBox
mk-61 commented on issue #16173: Saving and loading cudNN autotune and graph optimization URL: https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-598392266 How about adding a method to Symbol, to calculate certain aspects of a model? In C API it would look something like:

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-12 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new defa8c3 Bump the publish timestamp.

[GitHub] [incubator-mxnet] RuRo commented on issue #17734: [MXNET-889] Implement ONNX export for gluon LSTM.

2020-03-12 Thread GitBox
RuRo commented on issue #17734: [MXNET-889] Implement ONNX export for gluon LSTM. URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-598346865 The CI seems to be borked again btw 😩 This is an automated mes

[incubator-mxnet] branch master updated (bd6e917 -> dfbcf6f)

2020-03-12 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from bd6e917 [numpy] add op random.f (#17586) add dfbcf6f fix np.clip scalar input case (#17788) No new revisions were added by this update.

[GitHub] [incubator-mxnet] haojin2 merged pull request #17788: [Numpy] Fix np.clip in scalar case

2020-03-12 Thread GitBox
haojin2 merged pull request #17788: [Numpy] Fix np.clip in scalar case URL: https://github.com/apache/incubator-mxnet/pull/17788 This is an automated message from the Apache Git Service. To respond to the message, please log

[GitHub] [incubator-mxnet] haojin2 closed issue #17787: [Numpy] np.clip does not support scalar

2020-03-12 Thread GitBox
haojin2 closed issue #17787: [Numpy] np.clip does not support scalar URL: https://github.com/apache/incubator-mxnet/issues/17787 This is an automated message from the Apache Git Service. To respond to the message, please log

[GitHub] [incubator-mxnet] bricksdont edited a comment on issue #7375: Can I set instance weight when training?

2020-03-12 Thread GitBox
bricksdont edited a comment on issue #7375: Can I set instance weight when training? URL: https://github.com/apache/incubator-mxnet/issues/7375#issuecomment-598234055 Here is a gist with an actual implementation of batch-weighted cross-entropy loss that I believe can replace the default `

[GitHub] [incubator-mxnet] bricksdont commented on issue #7375: Can I set instance weight when training?

2020-03-12 Thread GitBox
bricksdont commented on issue #7375: Can I set instance weight when training? URL: https://github.com/apache/incubator-mxnet/issues/7375#issuecomment-598234055 Here is a gist with an actual implementation of batch-weighted cross-entropy loss that I believe can replace the default `SoftmaxO
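
The linked gist itself is not reproduced here, but as a rough sketch of the idea (weighting each instance's contribution to the loss), Gluon's `SoftmaxCrossEntropyLoss` already accepts an optional per-sample weight; the shapes and values below are illustrative only, not the gist's code.

```python
import mxnet as mx
from mxnet.gluon.loss import SoftmaxCrossEntropyLoss

loss_fn = SoftmaxCrossEntropyLoss()

# A toy batch: 4 samples, 3 classes.
pred = mx.nd.random.uniform(shape=(4, 3))          # unnormalized scores
label = mx.nd.array([0, 2, 1, 2])                  # class indices
sample_weight = mx.nd.array([1.0, 0.5, 2.0, 0.0])  # per-instance weights

# Gluon losses broadcast-multiply sample_weight into each sample's loss
# before any reduction you apply afterwards; shape (batch, 1) is expected.
weighted = loss_fn(pred, label, sample_weight.reshape(-1, 1))
print(weighted)          # one weighted loss value per sample
print(weighted.mean())   # reduce as desired when training
```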

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-12 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new a805766 Bump the publish timestamp.

[GitHub] [incubator-mxnet] hkvision opened a new issue #17822: [Question] Distributed training performance for one worker and one server on the same node

2020-03-12 Thread GitBox
hkvision opened a new issue #17822: [Question] Distributed training performance for one worker and one server on the same node URL: https://github.com/apache/incubator-mxnet/issues/17822 Hi, I’m referring to this page https://mxnet.apache.org/api/faq/distributed_training.html for di

[GitHub] [incubator-mxnet] aGiant commented on issue #17814: mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue within hybird_forward()

2020-03-12 Thread GitBox
aGiant commented on issue #17814: mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue within hybird_forward() URL: https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-598112586 > For the first example, I noticed that you are defining both `hybrid_f

[incubator-mxnet] branch master updated (713d962 -> bd6e917)

2020-03-12 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 713d962 [Numpy] FFI: Bincount, Percentile/Quantile, All/Any (#17717) add bd6e917 [numpy] add op random.f (#17586)

[incubator-mxnet] branch master updated: [numpy] add op random.f (#17586)

2020-03-12 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new bd6e917 [numpy] add op random.f (#17586)

[GitHub] [incubator-mxnet] haojin2 merged pull request #17586: [numpy] add op random.f

2020-03-12 Thread GitBox
haojin2 merged pull request #17586: [numpy] add op random.f URL: https://github.com/apache/incubator-mxnet/pull/17586 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHu

[GitHub] [incubator-mxnet] aGiant commented on issue #17814: mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue within hybird_forward()

2020-03-12 Thread GitBox
aGiant commented on issue #17814: mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue within hybird_forward() URL: https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-598071179 > For the first example, I noticed that you are defining both `hybrid_f

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #17779: [Numpy] FFI Invocation for Unary Ops

2020-03-12 Thread GitBox
hzfan commented on a change in pull request #17779: [Numpy] FFI Invocation for Unary Ops URL: https://github.com/apache/incubator-mxnet/pull/17779#discussion_r391449589 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -1933,7 +1933,7 @@ def _unary_func_helper(x, fn_ar