This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 893e3ef Bump the publis
xiexuliunian opened a new issue #17821: support onnx version
URL: https://github.com/apache/incubator-mxnet/issues/17821
## Description
Today I converted my mxnet model to onnx, but I found that the prelu slope is just an
int
![image](https://user-images.githubusercontent.com/24749356/76492051
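To illustrate why a scalar slope loses information, here is a minimal NumPy sketch of PReLU with a per-channel slope. The function name, shapes, and slope values are illustrative, not taken from the issue:

```python
import numpy as np

# PReLU: identity for positive inputs, slope-scaled for negative inputs.
# A per-channel slope array broadcasts against the input; collapsing it
# to a single int (as the issue reports for the ONNX export) discards
# the per-channel values.
def prelu(x, slope):
    return np.where(x > 0, x, slope * x)

x = np.array([[-1.0, 2.0], [-3.0, 4.0]])
slope = np.array([0.25, 0.1])  # hypothetical per-channel slopes
print(prelu(x, slope))
```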
hzfan commented on a change in pull request #17812: [numpy] FFI binary bitwise
ops
URL: https://github.com/apache/incubator-mxnet/pull/17812#discussion_r391414897
##
File path: src/api/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -36,4 +36,60 @@ MXNET_REGISTER_API("_n
hzfan commented on a change in pull request #17812: [numpy] FFI binary bitwise
ops
URL: https://github.com/apache/incubator-mxnet/pull/17812#discussion_r391415544
##
File path: src/api/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -36,4 +36,60 @@ MXNET_REGISTER_API("_n
hzfan commented on a change in pull request #17812: [numpy] FFI binary bitwise
ops
URL: https://github.com/apache/incubator-mxnet/pull/17812#discussion_r391415985
##
File path: src/api/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -36,4 +36,60 @@ MXNET_REGISTER_API("_n
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391412016
##
File path: src/operator/numpy/random/np_power_op.h
##
@
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391411320
##
File path: src/api/operator/numpy/random/np_exponential_op.cc
##
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391411970
##
File path: src/operator/numpy/random/np_power_op.h
##
@
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391413729
##
File path: src/operator/numpy/random/np_power_op.h
##
@
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391413684
##
File path: src/operator/numpy/random/np_power_op.h
##
@
hzfan commented on a change in pull request #17756: [numpy] FFI
random.exponential, logistic, gumbel, rayleigh, weibull, power, pareto
URL: https://github.com/apache/incubator-mxnet/pull/17756#discussion_r391413567
##
File path: src/api/operator/numpy/random/np_exponential_op.cc
##
haojin2 commented on issue #17811: add ffi full_like, binary ops, benchmark test
URL: https://github.com/apache/incubator-mxnet/pull/17811#issuecomment-598011340
@hzfan Does this PR look good to you?
frankfliu edited a comment on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598007352
@zixuanweeei Thanks for your quick response. We will try it out and let you
know the results.
frankfliu commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-598007352
@zixuanweeei Thanks for your quick response. We will try it out and let you
know the results.
zixuanweeei commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597994041
Hi @keerthanvasist. Thanks for the information about DJL.
We provided a patch f
keerthanvasist commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597989766
DJL is an open-source, framework-agnostic deep learning Java library. You
can check it ou
pengzhao-intel commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597980656
@keerthanvasist thanks for reporting the issue.
Could you give us some background on yo
rongzha1 opened a new pull request #17820: optimize for Reduce OP: norm sum
using default axis parameter on CPU backend
URL: https://github.com/apache/incubator-mxnet/pull/17820
## Description ##
reduced norm and sum are widely used in
[DGL](https://github.com/dmlc/dgl) models, such as
zixuanweeei commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597973937
Thanks for reporting this issue. I can reproduce the result by your script.
I will take a lo
wuxun-zhang commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-597958191
> I have tried mxnet docker image and now I am getting a new error while
running same command as below
> INFO:logger:Collected statistics
pengzhao-intel commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597953287
@zixuanweeei @ciyongch please help take a look at this issue, thanks.
pengzhao-intel commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597953295
@zixuanweeei @ciyongch please help take a look at this issue, thanks.
pengzhao-intel removed a comment on issue #17818: RNN operator produces
inconsistent gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597953295
@zixuanweeei @ciyongch please help take a look at this issue, thanks.
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 6bb4efb Bump the publis
leezu commented on issue #17815: Add SOVERSION when build shared libmxnet.so
library
URL: https://github.com/apache/incubator-mxnet/pull/17815#issuecomment-597939587
Wildcards work. For example, `python/mxnet/_ffi/_cy3/*.so` is already used.
rbavery opened a new issue #17819: Why is nvcc not installed along with cuda in
mxnet docker builds?
URL: https://github.com/apache/incubator-mxnet/issues/17819
I'm trying to reproduce this repository: https://github.com/msracver/FCIS,
which depends on an older version of mxnet, 1.4.1. I'v
zhreshold commented on issue #17814:
mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue
within hybird_forward()
URL:
https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-597936127
For the first example, I noticed that you are defining both `hybrid_f
reminisce commented on issue #17810: GraphExecutor + Numpy + Dynamic shape crash
URL:
https://github.com/apache/incubator-mxnet/issues/17810#issuecomment-597917475
This is because unknown-shape `NDArray`s are assigned the default
`storage_shape=(0,)`
https://github.com/apache/incu
szha edited a comment on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597916231
cc @pengzhao-intel
szha commented on issue #17818: RNN operator produces inconsistent gradients
for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597916231
cc @PatricZhao
keerthanvasist commented on issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL:
https://github.com/apache/incubator-mxnet/issues/17818#issuecomment-597912905
@mxnet-label-bot add [Backend]
guanxinq commented on a change in pull request #17569: Adding sparse support to
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r391260016
##
File path: include/mxnet/lib_api.h
##
@@ -,16 +1197,49 @@ extern "C" {
guanxinq commented on a change in pull request #17569: Adding sparse support to
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r391259899
##
File path: src/c_api/c_api.cc
##
@@ -114,13 +114,19 @@ void CustomFComputeDispat
keerthanvasist opened a new issue #17818: RNN operator produces inconsistent
gradients for h2h_bias for stacked RNNs
URL: https://github.com/apache/incubator-mxnet/issues/17818
The RNN operator produces inconsistent gradient values for
h2h_bias in the topmost stack in differe
sl1pkn07 commented on issue #17815: Add SOVERSION when build shared libmxnet.so
library
URL: https://github.com/apache/incubator-mxnet/pull/17815#issuecomment-597838839
Hi,
Does that accept things like 'libmxnet.so.*'? Because the version depends on
the value provided by 'include/mxnet.
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new bb79352 Bump the publis
sjtuWangDing opened a new pull request #17817: [Numpy] FFI for np_where
URL: https://github.com/apache/incubator-mxnet/pull/17817
## Description ##
FFI for np_where
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR t
hzfan opened a new pull request #17816: [Numpy] FFI: split and svd
URL: https://github.com/apache/incubator-mxnet/pull/17816
## Description ##
Support multiple output ndarrays. Examples:
- split
- svd
## Checklist ##
### Essentials ###
Please feel free to remove inappli
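The multiple-output support described in the PR above can be illustrated with NumPy's own split and svd, which are the two examples named; this is a sketch of the calling pattern, not the MXNet FFI itself:

```python
import numpy as np

# Both ops return several arrays from one call, which is the pattern
# the FFI change needs to support:
a = np.arange(6.0)
parts = np.split(a, 3)                # list of 3 sub-arrays
u, s, vt = np.linalg.svd(np.eye(2))   # tuple of 3 factor arrays
print(len(parts), u.shape, s.shape, vt.shape)
```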
zhreshold edited a comment on issue #17774:
test_recordimage_dataset_with_data_loader_multiworker Segmentation fault on OSX
URL:
https://github.com/apache/incubator-mxnet/issues/17774#issuecomment-597463041
@leezu
Let me update the mac test metric with `clang`
```
Apple clang
ArmageddonKnight commented on a change in pull request #17656: [MXNET-1404]
Implement storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r391161083
##
File path: ci/docker/runtime_functions.sh
##
@@ -
zachgk commented on issue #17755: Java and Scala nightly tests broken
URL:
https://github.com/apache/incubator-mxnet/issues/17755#issuecomment-597776607
The build passed and the nightly tests are now
[passing](http://jenkins.mxnet-ci.amazon-ml.com/job/NightlyTests/job/master/621/).
zachgk closed issue #17755: Java and Scala nightly tests broken
URL: https://github.com/apache/incubator-mxnet/issues/17755
leezu edited a comment on issue #17815: Add SOVERSION when build shared
libmxnet.so library
URL: https://github.com/apache/incubator-mxnet/pull/17815#issuecomment-597773010
You may need to update the CI settings to be more flexible about the name of
the `.so` file:
See the locations her
leezu commented on issue #17815: Add SOVERSION when build shared libmxnet.so
library
URL: https://github.com/apache/incubator-mxnet/pull/17815#issuecomment-597773010
You may need to update the CI settings to be more flexible about the name of
the `.so` file:
See the locations here
http
leezu commented on issue #17810: GraphExecutor + Numpy + Dynamic shape crash
URL:
https://github.com/apache/incubator-mxnet/issues/17810#issuecomment-597769597
`HybridBlock` / `CachedOp` is covered by unittests and seems to work
eric-haibin-lin commented on issue #17810: GraphExecutor + Numpy + Dynamic
shape crash
URL:
https://github.com/apache/incubator-mxnet/issues/17810#issuecomment-597743431
does it only happen with graph executor?
leezu commented on a change in pull request #17656: [MXNET-1404] Implement
storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r391091356
##
File path: ci/docker/runtime_functions.sh
##
@@ -715,7 +715,
aaronmarkham commented on issue #17808: [WIP] Windows dev environment
configuration, update install instructions from source in the docs
URL: https://github.com/apache/incubator-mxnet/pull/17808#issuecomment-597728044
Let me know when this is ready, and I'll test it out.
leezu commented on a change in pull request #17656: [MXNET-1404] Implement
storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r391091356
##
File path: ci/docker/runtime_functions.sh
##
@@ -715,7 +715,
marcoabreu commented on issue #17799: optimization debian package manager tweaks
URL: https://github.com/apache/incubator-mxnet/pull/17799#issuecomment-597724715
But I honestly don't see the benefit here, I'm sorry. Saving a few hundred
megabytes on a system that processes terabytes per day
sl1pkn07 opened a new pull request #17815: Add SOVERSION when build shared
libmxnet.so library
URL: https://github.com/apache/incubator-mxnet/pull/17815
## Description ##
https://en.wikipedia.org/wiki/Soname
https://cmake.org/cmake/help/latest/prop_tgt/SOVERSION.html
## Checkli
ArmageddonKnight commented on a change in pull request #17656: [MXNET-1404]
Implement storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r391022209
##
File path: ci/docker/runtime_functions.sh
##
@@ -
ArmageddonKnight commented on a change in pull request #17656: [MXNET-1404]
Implement storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r391016645
##
File path: ci/docker/runtime_functions.sh
##
@@ -
RuRo edited a comment on issue #17734: [MXNET-889] Implement ONNX export for
gluon LSTM.
URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-597648893
@anirudhacharya I haven't implemented any unittests yet, but the code **is**
tested. I've provided an example test case
RuRo commented on issue #17734: [MXNET-889] Implement ONNX export for gluon
LSTM.
URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-597648893
@anirudhacharya I haven't implemented any unittests yet, but the code **is**
tested. I've provided an example test case in my o
chichivica commented on issue #14369: mxnet build error: with flag USE_TENSORRT
URL:
https://github.com/apache/incubator-mxnet/issues/14369#issuecomment-597628964
Same issue here.
@sleepwalker2017 Hey! How did you solve that?
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a3ef1f1 Bump the publis
venkat-kittu commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-597581414
@wuxun-zhang Nope, it's not working; it only works when I keep
--num-calib-batches=33. For numbers less than 33, it's not working.
Alex-Ob edited a comment on issue #17813: Non-compiling code
URL:
https://github.com/apache/incubator-mxnet/issues/17813#issuecomment-597574341
The problem was solved by commenting out two `#include "op.h"` directives
(`operator.hpp:36` and `MxNetCpp.h:37`). Now it builds correctly.
Alex-Ob closed issue #17813: Non-compiling code
URL: https://github.com/apache/incubator-mxnet/issues/17813
Alex-Ob commented on issue #17813: Non-compiling code
URL:
https://github.com/apache/incubator-mxnet/issues/17813#issuecomment-597574341
The problem was solved by commenting out two include directives(
operator.hpp:36 and MxNetCpp.h:37). Now it builds correctly.
aGiant commented on issue #17814:
mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue
within hybird_forward()
URL:
https://github.com/apache/incubator-mxnet/issues/17814#issuecomment-597561951
Also tried to reproduce the demo code from link:
https://mxnet.apache.o
aGiant opened a new issue #17814:
mxnet.gluon.data.vision.transforms.Normalize(mean=0.0, std=1.0) tuple issue
within hybird_forward()
URL: https://github.com/apache/incubator-mxnet/issues/17814
From official mxnet web:
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/trai
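For reference, a minimal NumPy sketch of what a Normalize transform computes, assuming the standard (x - mean) / std semantics with either scalar or per-channel tuple arguments; the helper name and values are hypothetical, not the Gluon implementation:

```python
import numpy as np

# Normalize an image of shape (channels, height, width).
# Tuple mean/std are reshaped to (C, 1, 1) so they broadcast per channel;
# scalars broadcast as-is.
def normalize(x, mean=0.0, std=1.0):
    mean = np.asarray(mean).reshape(-1, 1, 1) if np.ndim(mean) else mean
    std = np.asarray(std).reshape(-1, 1, 1) if np.ndim(std) else std
    return (x - mean) / std

img = np.ones((3, 2, 2))
print(normalize(img, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)).shape)
```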
Alex-Ob opened a new issue #17813: Non-compiling code
URL: https://github.com/apache/incubator-mxnet/issues/17813
## Description
I can't compile whole package **region_tlr_mxnet** for **Autoware.ai** due
to this error:
### Error Message
```
In file included from
/h
leezu commented on a change in pull request #17656: [MXNET-1404] Implement
storage tagging, the first half of the memory profiler
URL: https://github.com/apache/incubator-mxnet/pull/17656#discussion_r390823804
##
File path: ci/docker/runtime_functions.sh
##
@@ -715,7 +715,
hzfan commented on a change in pull request #17812: [numpy] FFI binary bitwise
ops
URL: https://github.com/apache/incubator-mxnet/pull/17812#discussion_r390781243
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -5848,7 +5848,9 @@ def bitwise_and(x1, x2, out=None, **
haojin2 commented on a change in pull request #17812: [numpy] FFI binary
bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17812#discussion_r390781322
##
File path: src/api/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -36,4 +36,29 @@ MXNET_REGISTER_API("
Yiyan66 opened a new pull request #17812: [numpy] FFI binary bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17812
## Description ##
FFI bitwise_and, bitwise_or, bitwise_xor
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for
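The three ops covered by the FFI PR above follow NumPy semantics; a quick sketch of the expected behavior (values are illustrative):

```python
import numpy as np

# Elementwise bitwise ops on integer arrays, shown in binary for clarity:
a = np.array([0b1100, 0b1010])
b = np.array([0b1010, 0b0110])
print(np.bitwise_and(a, b))  # [8 2]
print(np.bitwise_or(a, b))   # [14 14]
print(np.bitwise_xor(a, b))  # [ 6 12]
```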
Alicia1529 opened a new pull request #17811: add ffi full_like, binary ops,
benchmark test
URL: https://github.com/apache/incubator-mxnet/pull/17811
## Description ##
add ffi invocation for full_like, subtract, divide, true_divide, mod,
remainder, power, multiply
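Several of the ops named in the PR above have direct NumPy equivalents, sketched here for reference (inputs are illustrative):

```python
import numpy as np

# NumPy counterparts of full_like and a few of the binary ops listed:
x = np.array([1.0, 2.0, 4.0])
print(np.full_like(x, 7.0))    # [7. 7. 7.]
print(np.true_divide(x, 2.0))  # [0.5 1.  2. ]
print(np.power(x, 2))          # [ 1.  4. 16.]
```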