This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b492411 Bump the publis
anirudhacharya commented on issue #17734: [MXNET-889] Implement ONNX export for
gluon LSTM.
URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-598573803
> I could add a test case to [test_node.py
export_test_cases](https://github.com/apache/incubator-mxnet/blob/713d9623
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 18c2a26 [numpy] add magic methods for symbol bitwise ops (#17807)
add 34010ea [CD] switch CD_RELEAS
szha merged pull request #17775: [CD] switch CD_RELEASE_JOB_NAME from global
env var to job argument
URL: https://github.com/apache/incubator-mxnet/pull/17775
This is an automated message from the Apache Git Service.
To resp
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392038428
##
File path: src/api/operator/numpy/linalg/np_pinv.cc
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
hzfan commented on a change in pull request #17811: add ffi full_like, binary
ops, benchmark test
URL: https://github.com/apache/incubator-mxnet/pull/17811#discussion_r392036401
##
File path: src/operator/tensor/init_op.h
##
@@ -105,6 +105,13 @@ struct FullLikeOpParam : pu
hzfan commented on a change in pull request #17811: add ffi full_like, binary
ops, benchmark test
URL: https://github.com/apache/incubator-mxnet/pull/17811#discussion_r392034583
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -1025,8 +1023,9 @@ def subtract(x1, x2, o
samskalicky commented on issue #17569: Adding sparse support to MXTensor for
custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#issuecomment-598550796
Good stuff @guanxinq ! I think we're really close to the finish line. I only
had small suggestions. Overall strategy l
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392031236
##
File path: example/extensions/lib_custom_op/transcsr_lib.cc
##
@@ -0,0 +1,195
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392031386
##
File path: example/extensions/lib_custom_op/transcsr_lib.cc
##
@@ -0,0 +1,195
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392030868
##
File path: example/extensions/lib_custom_op/transcsr_lib.cc
##
@@ -0,0 +1,195
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392030429
##
File path: example/extensions/lib_custom_op/transcsr_lib.cc
##
@@ -0,0 +1,195
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392026937
##
File path: include/mxnet/lib_api.h
##
@@ -214,6 +214,18 @@ enum MXDType {
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392023970
##
File path: src/c_api/c_api.cc
##
@@ -572,12 +645,30 @@ int MXLoadLib(const ch
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392022581
##
File path: src/c_api/c_api.cc
##
@@ -162,6 +195,24 @@ void CustomFComputeDisp
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392022241
##
File path: src/c_api/c_api.cc
##
@@ -341,6 +412,8 @@ int MXLoadLib(const char
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392021465
##
File path: src/c_api/c_api.cc
##
@@ -178,6 +229,13 @@ void CustomFComputeDisp
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392019600
##
File path: python/mxnet/numpy/linalg.py
##
@@ -232,7 +232,7 @@ def svd(a):
return _mx_nd_np.linalg.svd(a)
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016315
##
File path: src/api/operator/numpy/linalg/np_eigvals.cc
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Softwa
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392018516
##
File path: src/api/operator/numpy/linalg/np_tensorinv.cc
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Soft
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016637
##
File path: src/api/operator/numpy/linalg/np_pinv.cc
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016699
##
File path: src/api/operator/numpy/linalg/np_pinv.cc
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392016917
##
File path: src/api/operator/numpy/linalg/np_potrf.cc
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software
hzfan commented on a change in pull request #17795: [Numpy] FFI for linalg ops
URL: https://github.com/apache/incubator-mxnet/pull/17795#discussion_r392019068
##
File path: src/api/operator/numpy/linalg/np_tensorsolve.cc
##
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache So
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392019355
##
File path: include/mxnet/lib_api.h
##
@@ -1091,6 +1187,43 @@ extern "C" {
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392018268
##
File path: include/mxnet/lib_api.h
##
@@ -393,13 +458,21 @@ class OpResource
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017949
##
File path: include/mxnet/lib_api.h
##
@@ -393,13 +458,21 @@ class OpResource
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017757
##
File path: include/mxnet/lib_api.h
##
@@ -393,13 +458,21 @@ class OpResource
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392017457
##
File path: include/mxnet/lib_api.h
##
@@ -378,6 +436,13 @@ class OpResource {
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392016625
##
File path: include/mxnet/lib_api.h
##
@@ -229,20 +241,60 @@ enum MXReturnValu
samskalicky commented on a change in pull request #17569: Adding sparse support
to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r392012930
##
File path: example/extensions/lib_custom_op/gemm_lib.cc
##
@@ -140,6 +140,22
wuxun-zhang edited a comment on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-598524032
@venkat-kittu I have just provided a patch here
https://github.com/wuxun-zhang/incubator-mxnet/commit/c06a715985dc1db3ae2e65227bf2ef4
hzfan commented on a change in pull request #17817: [Numpy] FFI for np_where
URL: https://github.com/apache/incubator-mxnet/pull/17817#discussion_r392008301
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -7482,14 +7482,8 @@ def where(condition, x=None, y=None): # py
wuxun-zhang commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-598524032
@venkat-kittu I have just provided a patch here
https://github.com/wuxun-zhang/incubator-mxnet/commit/37931b624dfdb75fc0a0090a813833632e47cc
hkvision commented on issue #17822: [Question] Distributed training performance
for one worker and one server on the same node
URL:
https://github.com/apache/incubator-mxnet/issues/17822#issuecomment-598521256
> You probably want to check out
https://medium.com/apache-mxnet/intel-mlsl-mak
zixuanweeei commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598520346
> @zixuanweeei Thank you for your fix. I verified that it does fix the issue
in CPU. We veri
keerthanvasist commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598512194
@zixuanweeei Thank you for your fix. I verified that it does fix the issue
in CPU. We ver
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 18c2a26 [numpy] add magic methods for sy
haojin2 merged pull request #17807: [numpy] add magic methods for symbol
bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17807
This is an automated message from the Apache Git Service.
To respond to the messa
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a887966 Bump the publis
haojin2 commented on issue #17385: [NumPy] add random.geometric op
URL: https://github.com/apache/incubator-mxnet/pull/17385#issuecomment-598480020
@hzfan Can you also take a look at the FFI parts?
This is an automated message
haojin2 commented on issue #17385: [NumPy] add random.geometric op
URL: https://github.com/apache/incubator-mxnet/pull/17385#issuecomment-598479958
@AntiZpvoh Please address all comments and also resolve the conflicts.
This is
haojin2 commented on a change in pull request #17385: [NumPy] add
random.geometric op
URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957825
##
File path: src/operator/numpy/random/np_geometric_op.h
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apac
haojin2 commented on a change in pull request #17385: [NumPy] add
random.geometric op
URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957740
##
File path: src/operator/numpy/random/np_geometric_op.h
##
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apac
haojin2 commented on a change in pull request #17385: [NumPy] add
random.geometric op
URL: https://github.com/apache/incubator-mxnet/pull/17385#discussion_r391957920
##
File path: src/operator/numpy/random/np_geometric_op.h
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apac
sxjscience opened a new issue #17823: [Operator] Add `index_add` or
`index_update` to numpy extension
URL: https://github.com/apache/incubator-mxnet/issues/17823
We need the functionality to calculate `b = index_add(a, indices, value)`,
which mimics the outcome of `a[indices] += value`.
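The requested operator did not exist in MXNet at the time; as an illustrative sketch of the semantics being asked for, here is an out-of-place `index_add` in plain NumPy (the function name and signature mirror the issue text, the implementation via `np.add.at` is an assumption — its unbuffered accumulation is what makes repeated indices behave like `a[indices] += value`):

```python
import numpy as np

def index_add(a, indices, value):
    """Out-of-place sketch: return a copy of `a` with `value`
    accumulated at `indices` (repeated indices accumulate)."""
    b = a.copy()
    # np.add.at performs unbuffered in-place addition, so an index that
    # appears twice is incremented twice -- unlike plain fancy-index +=
    np.add.at(b, indices, value)
    return b

a = np.zeros(5)
b = index_add(a, [0, 1, 1], 1.0)
print(b)  # index 1 appears twice, so it receives 2.0
print(a)  # the input array is left untouched
```

Note the contrast with `a[[0, 1, 1]] += 1.0`, which buffers the gather and would only add 1.0 at index 1; the issue's `a[indices] += value` phrasing is the conceptual target, and the accumulating behavior is the part a real kernel would need to get right.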
haojin2 commented on a change in pull request #17323: [Numpy] Kron operator
URL: https://github.com/apache/incubator-mxnet/pull/17323#discussion_r391946230
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax,
fmin, fmod
URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943282
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -1164,6 +1165,34 @@ def mod(x1, x2, out=None, **kwarg
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax,
fmin, fmod
URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943379
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -4367,6 +4396,26 @@ def maximum(x1, x2, out=None, **k
haojin2 commented on a change in pull request #17567: [Numpy] Add op fmax,
fmin, fmod
URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r391943076
##
File path: python/mxnet/symbol/numpy/_symbol.py
##
@@ -4156,6 +4171,11 @@ def any(a, axis=None, out=None
haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \
delete \ matmul etc.
URL: https://github.com/apache/incubator-mxnet/pull/17759#discussion_r391937750
##
File path: src/operator/tensor/init_op.h
##
@@ -215,6 +215,21 @@ struct RangeParam : publi
haojin2 commented on a change in pull request #17759: [numpy] FFI for insert \
delete \ matmul etc.
URL: https://github.com/apache/incubator-mxnet/pull/17759#discussion_r391936815
##
File path: src/operator/tensor/init_op.h
##
@@ -215,6 +215,21 @@ struct RangeParam : publi
eric-haibin-lin commented on issue #17822: [Question] Distributed training
performance for one worker and one server on the same node
URL:
https://github.com/apache/incubator-mxnet/issues/17822#issuecomment-598399435
You probably want to check out
https://medium.com/apache-mxnet/intel-mls
mk-61 edited a comment on issue #16173: Saving and loading cudNN autotune and
graph optimization
URL:
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-598392266
How about adding a method to Symbol, to calculate certain aspects of a
model? In C API it would look somethin
mk-61 commented on issue #16173: Saving and loading cudNN autotune and graph
optimization
URL:
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-598392266
How about adding a method to Symbol, to calculate certain aspects of a
model? In C API it would look something like:
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new defa8c3 Bump the publis
RuRo commented on issue #17734: [MXNET-889] Implement ONNX export for gluon
LSTM.
URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-598346865
The CI seems to be borked again btw 😩
This is an automated mes
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from bd6e917 [numpy] add op random.f (#17586)
add dfbcf6f fix np.clip scalar input case (#17788)
No new rev
haojin2 merged pull request #17788: [Numpy] Fix np.clip in scalar case
URL: https://github.com/apache/incubator-mxnet/pull/17788
This is an automated message from the Apache Git Service.
To respond to the message, please log
haojin2 closed issue #17787: [Numpy] np.clip does not support scalar
URL: https://github.com/apache/incubator-mxnet/issues/17787
This is an automated message from the Apache Git Service.
To respond to the message, please log
bricksdont edited a comment on issue #7375: Can I set instance weight when
training?
URL:
https://github.com/apache/incubator-mxnet/issues/7375#issuecomment-598234055
Here is a gist with an actual implementation of batch-weighted cross-entropy
loss that I believe can replace the default `
bricksdont commented on issue #7375: Can I set instance weight when training?
URL:
https://github.com/apache/incubator-mxnet/issues/7375#issuecomment-598234055
Here is a gist with an actual implementation of batch-weighted cross-entropy
loss that I believe can replace the default `SoftmaxO
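The gist itself is truncated above, so the following is only a rough illustration of the idea being described — a softmax cross-entropy where every instance in the batch carries its own weight. All names here are invented for illustration and are not taken from the gist; the sketch uses plain NumPy rather than MXNet symbols:

```python
import numpy as np

def weighted_softmax_ce(logits, labels, weights):
    """Batch-weighted cross-entropy (illustrative, not the gist's code).

    logits  : (batch, classes) raw scores
    labels  : (batch,) integer class ids
    weights : (batch,) per-instance weights
    """
    # numerically stable log-softmax
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # negative log-likelihood of the true class, per instance
    nll = -log_probs[np.arange(len(labels)), labels]
    # weighted mean: an instance with weight 0 contributes nothing
    return (weights * nll).sum() / weights.sum()

logits = np.array([[2.0, 0.5], [0.1, 3.0]])
labels = np.array([0, 1])
loss = weighted_softmax_ce(logits, labels, np.array([1.0, 1.0]))
```

With uniform weights this reduces to the ordinary mean cross-entropy; normalizing by `weights.sum()` keeps the loss scale independent of how the weights themselves are scaled.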
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a805766 Bump the publis
hkvision opened a new issue #17822: [Question] Distributed training performance
for one worker and one server on the same node
URL: https://github.com/apache/incubator-mxnet/issues/17822
Hi,
I’m referring to this page
https://mxnet.apache.org/api/faq/distributed_training.html for di
aGiant commented on issue #17814:
mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue
within hybird_forward()
URL:
https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-598112586
> For the first example, I noticed that you are defining both
`hybrid_f
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 713d962 [Numpy] FFI: Bincount, Percentile/Quantile, All/Any (#17717)
add bd6e917 [numpy] add op random.
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new bd6e917 [numpy] add op random.f (#17586)
haojin2 merged pull request #17586: [numpy] add op random.f
URL: https://github.com/apache/incubator-mxnet/pull/17586
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHu
aGiant commented on issue #17814:
mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue
within hybird_forward()
URL:
https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-598071179
> For the first example, I noticed that you are defining both
`hybrid_f
hzfan commented on a change in pull request #17779: [Numpy] FFI Invocation for
Unary Ops
URL: https://github.com/apache/incubator-mxnet/pull/17779#discussion_r391449589
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -1933,7 +1933,7 @@ def _unary_func_helper(x, fn_ar