ciyongch commented on a change in pull request #15950: [MKLDNN]Support
fullyconnected and element-wise ops fusion
URL: https://github.com/apache/incubator-mxnet/pull/15950#discussion_r316002034
##
File path: src/operator/subgraph/mkldnn/mkldnn_fc_property.h
##
@@ -68,44
hzfan commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r316000342
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
apeforest commented on issue #15545: Softmax optimization for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#issuecomment-523292097
Thanks for refactoring the Softmax functions to merge them into one.
reminisce commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315995425
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
apeforest commented on a change in pull request #15545: Softmax optimization
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315994975
##
File path: src/operator/nn/softmax-inl.h
##
@@ -188,89 +180,77 @@ struct log_softmax_bwd {
}
};
-
mahmoodn opened a new issue #15957: error: call of overloaded is ambiguous
URL: https://github.com/apache/incubator-mxnet/issues/15957
With gcc 7.4, CUDA 10, and mxnet-1.4.1, I get this compilation error:
```
include/mxnet/././ndarray.h:169:78: error: call of overloaded
```
xidulu opened a new pull request #15956: [Numpy] random.randint() implemented
URL: https://github.com/apache/incubator-mxnet/pull/15956
## Description ##
np.random.randint()
https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randint.html
## Checklist ##
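For reference, np.random.randint samples integers from the half-open interval [low, high); a quick NumPy sketch of the semantics this PR targets (standard NumPy shown, not the MXNet implementation):

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed for reproducibility
samples = rng.randint(low=2, high=10, size=1000)

# Values lie in the half-open interval [low, high): high itself never appears.
assert samples.min() >= 2
assert samples.max() <= 9

# With a single argument, the interval is [0, low).
assert rng.randint(5, size=100).max() <= 4
```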
hzfan commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315983914
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
hzfan commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315983703
##
File path: contrib/tvmop/basic/ufunc.py
##
@@ -48,3 +50,71 @@ def vadd_gpu(dtype, ndim):
s[C].bind(bx,
hzfan commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315980043
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
DickJC123 opened a new pull request #15955: Debug laop 6
URL: https://github.com/apache/incubator-mxnet/pull/15955
## Description ##
Do not merge this PR. This PR is experimenting with seeds of the
test_laop_6 unittest. The PR is seeking to learn if errors seen with the
64-bit
This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 7fb8860 Bump the publish
marcoabreu commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-523250049
Okay, so let's have a discussion on dev@ and then agree on a style.
larroy commented on a change in pull request #15063: Rename np_compat to
np_shape
URL: https://github.com/apache/incubator-mxnet/pull/15063#discussion_r315958800
##
File path: include/mxnet/c_api.h
##
@@ -1067,14 +1067,14 @@ MXNET_DLL int MXAutogradIsTraining(bool* curr);
anirudh2290 merged pull request #15574: fix naive engine for multi-threaded
inference
URL: https://github.com/apache/incubator-mxnet/pull/15574
This is an automated message from the Apache Git Service.
To respond to the
anirudh2290 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 308e4ac Adding tests to verify support for Large Tensors in
additional Ops along with new C_Apis
haojin2 commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523243342
@larroy I think the author of #13818 is @larroy, isn't it?
I was quoting your comment:
larroy commented on a change in pull request #15545: Softmax optimization for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315954114
##
File path: src/operator/nn/softmax-inl.h
##
@@ -313,71 +294,134 @@ __global__ void
larroy commented on a change in pull request #15545: Softmax optimization for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315953722
##
File path: src/operator/nn/softmax-inl.h
##
@@ -313,71 +294,134 @@ __global__ void
larroy commented on a change in pull request #15545: Softmax optimization for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315953443
##
File path: src/operator/nn/softmax-inl.h
##
@@ -301,7 +282,7 @@ __global__ void
apeforest commented on issue #15613: [Discussion] 1.5.1 Patch Release
URL:
https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-523242083
This PR fixes a memory misalignment bug in the topk operator that was
introduced recently. Please add it to the 1.5.1 patch release:
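The topk misalignment fix above concerns carving one byte workspace into typed sections: each section's offset must be rounded up to its type's alignment. A plain-Python sketch of the align-up arithmetic (hypothetical helper, not the PR's actual code):

```python
def align_up(offset, align):
    """Round offset up to the next multiple of align (a power of two)."""
    return (offset + align - 1) & ~(align - 1)

# An 8-byte-aligned double section placed after 10 bytes of chars must
# start at offset 16, not 10.
assert align_up(10, 8) == 16
assert align_up(16, 8) == 16  # already-aligned offsets are unchanged
```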
samskalicky commented on issue #15613: [Discussion] 1.5.1 Patch Release
URL:
https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-523240695
@TaoLv I'd like to pull in the following PRs, they are necessary fixes for
some of my use-cases:
larroy commented on a change in pull request #15545: Softmax optimization for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315952016
##
File path: src/operator/nn/softmax-inl.h
##
@@ -188,89 +180,77 @@ struct log_softmax_bwd {
}
};
-
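For reference, the gradient of log-softmax touched in these diffs is, mathematically, dx = dy - softmax(x) * sum(dy) along the softmax axis; a NumPy sketch of that math (not the PR's CUDA kernel):

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable: subtract the max before exponentiating.
    shifted = x - x.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

def log_softmax_bwd(dy, y, axis=-1):
    # y is the forward output log_softmax(x); softmax(x) == exp(y).
    return dy - np.exp(y) * dy.sum(axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
y = log_softmax(x)
# Sanity check: exp(y) sums to 1 along the softmax axis.
assert np.allclose(np.exp(y).sum(axis=-1), 1.0)
# The gradient sums to zero along the axis, as expected for log-softmax.
assert np.allclose(log_softmax_bwd(np.ones_like(y), y).sum(axis=-1), 0.0)
```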
larroy commented on issue #15951: Revert "Numpy-compatible concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523239737
@marcoabreu I think you can close this empty PR, since the conversation
here is not constructive and there's no code change.
larroy edited a comment on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523238480
@haojin2 I think you are confusing me with @marcoabreu. I didn't make the
change you mention. My handle is @larroy.
larroy commented on issue #15951: Revert "Numpy-compatible concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523238480
@haojin2 I think you are confusing me with @marcoabreu. I didn't make the
change you mention. My handle is @larroy.
ptrendx commented on a change in pull request #15545: Softmax optimization for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315948857
##
File path: src/operator/nn/softmax-inl.h
##
@@ -218,6 +219,157 @@ __global__ void
eric-haibin-lin edited a comment on issue #15930: Fix dtype inference in
arange_like operator
URL: https://github.com/apache/incubator-mxnet/pull/15930#issuecomment-523225836
Yes. Could you also check the forward output with [0, 1, 2, ...] etc.?
eric-haibin-lin commented on issue #15930: Fix dtype inference in arange_like
operator
URL: https://github.com/apache/incubator-mxnet/pull/15930#issuecomment-523225836
Yes
eric-haibin-lin commented on a change in pull request #15545: Softmax
optimization for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#discussion_r315936560
##
File path: src/operator/nn/softmax-inl.h
##
@@ -218,6 +219,157 @@ __global__ void
larroy closed pull request #14601: [WIP] Add test for gemm overflow.
URL: https://github.com/apache/incubator-mxnet/pull/14601
haojin2 commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523207697
@larroy Please don't avoid my question: in #15952 you claimed that I shall
tag you on any changes to files under the CI folder, so
ZhennanQin commented on a change in pull request #15950: [MKLDNN]Support
fullyconnected and element-wise ops fusion
URL: https://github.com/apache/incubator-mxnet/pull/15950#discussion_r315916256
##
File path: src/operator/subgraph/mkldnn/mkldnn_fc_property.h
##
@@ -68,44
haojin2 commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523204837
@marcoabreu Maybe let's take a look at one unix-cpu
apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from cd397a3 Benchmark doc fix (#15769)
add 308e4ac Adding tests to verify support for Large Tensors
apeforest merged pull request #15895: Adding tests and C APIs for Large Tensors
URL: https://github.com/apache/incubator-mxnet/pull/15895
larroy commented on issue #15929: ./build.py -p ubuntu_tpu_tensorrt fails with
error
URL:
https://github.com/apache/incubator-mxnet/issues/15929#issuecomment-523203768
Hi @arsdragonfly
You have a dirty working directory; you should remove the build folder or
really clean the
apeforest commented on issue #15953: Add Median,p50,p99 to python profiler
URL: https://github.com/apache/incubator-mxnet/pull/15953#issuecomment-523200903
Thanks for the contribution. How do users enable the percentile output,
or does it come by default?
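As a sketch of what "Median, p50, p99" means for a list of per-run timings, here is plain Python with linear interpolation (function name hypothetical, not the PR's profiler_utils code):

```python
def percentile(values, q):
    """q-th percentile (0-100) with linear interpolation, like NumPy's default."""
    s = sorted(values)
    idx = (len(s) - 1) * q / 100.0
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

times_ms = list(range(1, 101))  # pretend per-run latencies: 1..100 ms
assert percentile(times_ms, 50) == 50.5             # median == p50
assert abs(percentile(times_ms, 99) - 99.01) < 1e-9  # tail latency
```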
apeforest commented on a change in pull request #15953: Add Median,p50,p99 to
python profiler
URL: https://github.com/apache/incubator-mxnet/pull/15953#discussion_r315909903
##
File path: benchmark/opperf/utils/profiler_utils.py
##
@@ -228,10 +229,16 @@ def
larroy edited a comment on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523200224
@haojin2 I just helped Marco revert the PR since, due to the way it was
merged, it was not trivial; please don't kill the
larroy commented on issue #15951: Revert "Numpy-compatible concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523200224
@haojin2 I just helped Marco revert the PR since, due to the way it was
merged, it was not trivial; please don't kill the messenger.
apeforest commented on issue #15895: Adding tests and C APIs for Large Tensors
URL: https://github.com/apache/incubator-mxnet/pull/15895#issuecomment-523197438
@ChaiBapchya could you please review?
marcoabreu commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523195164
Hi Hao,
this is not a personal affront towards you or your PR and I understand your
frustration. The maximum time
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 9724c7a Bump the publish
reminisce commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315894807
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
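The Tvm broadcast backward thread above is about summing an output gradient back over broadcast axes to recover the input's shape; a NumPy sketch of that reduction (hypothetical helper, not the PR's TVM kernel):

```python
import numpy as np

def reduce_grad_to_shape(grad, in_shape):
    """Sum an output gradient back down to the (broadcast) input's shape."""
    # Sum over leading axes that broadcasting added.
    while grad.ndim > len(in_shape):
        grad = grad.sum(axis=0)
    # Sum over axes where the input had size 1 but the output did not.
    for ax, size in enumerate(in_shape):
        if size == 1 and grad.shape[ax] != 1:
            grad = grad.sum(axis=ax, keepdims=True)
    return grad

a = np.ones((1, 3))   # broadcast against shape (2, 3)
dy = np.ones((2, 3))  # upstream gradient of a + b, with b of shape (2, 3)
da = reduce_grad_to_shape(dy, a.shape)
assert da.shape == (1, 3)
assert (da == 2).all()  # each input element fed 2 output elements
```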
haojin2 opened a new pull request #15954: Revert unix-cpu timeout increase
URL: https://github.com/apache/incubator-mxnet/pull/15954
## Description ##
As title.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR
PatriciaXiao commented on issue #15629: DataLoader Error using Multi processing
URL:
https://github.com/apache/incubator-mxnet/issues/15629#issuecomment-523183266
It isn't always like this: normally it won't happen; only after I tried to
upgrade Python to 3.7 on the server did it become like
haojin2 edited a comment on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523179089
@larroy So if you're really that actively in charge of CI stuff then why
don't you root cause it while this very PR you
haojin2 commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523179089
@larroy So if you're really that actively in charge of CI stuff then why
don't you root cause it while this very PR you guys
haojin2 closed pull request #15952: Revert #15842 and #15894. These PRs should
not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952
haojin2 commented on issue #15952: Revert #15842 and #15894. These PRs should
not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952#issuecomment-523172593
I'll revert the CI time limit in a separate PR myself, closing this one now.
ChaiBapchya opened a new pull request #15953: Add Median,p50,p99 to python
profiler
URL: https://github.com/apache/incubator-mxnet/pull/15953
## Description ##
Profile more metrics (than just average; also fix incorrect avg calculation)
## Checklist ##
### Essentials ###
larroy commented on issue #15951: Revert "Numpy-compatible concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523171026
@haojin2 An increase of the CI limit should be discussed first and done in a
separate PR. We have gone from 1:20 to 3h; we should
larroy commented on issue #15952: Revert #15842 and #15894. These PRs should
not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952#issuecomment-523170610
It's not really a duplicate, as the PR you linked has an empty patch.
larroy commented on issue #15952: Revert #15842 and #15894. These PRs should
not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952#issuecomment-523170205
I couldn't revert just one, as the other builds on top and creates a
conflict while reverting. Feel free to
larroy commented on issue #15951: Revert "Numpy-compatible concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523170377
This PR has an empty patch, as there were several empty commits. @marcoabreu
I think you should close this one.
haojin2 commented on issue #15952: Revert #15842 and #15894. These PRs should
not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952#issuecomment-523164907
Please let me know why #15842 also has to be reverted; it seems like it has
nothing to do with CI?
larroy opened a new pull request #15952: Revert #15842 and #15894. These PRs
should not modify CI timeouts.
URL: https://github.com/apache/incubator-mxnet/pull/15952
## Description ##
These PRs introduced an unrelated increase of the CI time limit. This should
be in a separate PR
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new f73a540 Bump the publish
haojin2 commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523142933
I don't think that's a reasonable assumption. Firstly, this PR passed CI
several times already (since I rebased with master
reminisce commented on issue #15951: Revert "Numpy-compatible concatenate
upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951#issuecomment-523141806
@marcoabreu Could you elaborate on what urged reverting the PR? I am expecting
a longer duration since we have been adding a
marcoabreu pushed a change to branch revert-15894-np_concatenate_master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
at 3afc7c0 Revert "Numpy-compatible concatenate upstream (#15894)"
No new
larroy commented on a change in pull request #15894: Numpy-compatible
concatenate upstream
URL: https://github.com/apache/incubator-mxnet/pull/15894#discussion_r315836185
##
File path: ci/jenkins/Jenkinsfile_unix_cpu
##
@@ -21,7 +21,7 @@
// See documents at
marcoabreu opened a new pull request #15951: Revert "Numpy-compatible
concatenate upstream"
URL: https://github.com/apache/incubator-mxnet/pull/15951
Reverts apache/incubator-mxnet#15894
This PR increased the CI timeout. I assume that it did not pass without
increasing the limit,
apeforest commented on a change in pull request #15948: Fix a memory
misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#discussion_r315824638
##
File path: 3rdparty/mshadow/mshadow/tensor.h
##
@@ -69,15 +69,15 @@ struct Shape {
*
apeforest commented on a change in pull request #15948: Fix a memory
misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#discussion_r315824135
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -414,30 +414,23 @@ void
apeforest edited a comment on issue #15703: Storage manager / memory usage
regression in v1.5
URL:
https://github.com/apache/incubator-mxnet/issues/15703#issuecomment-523119231
@TaoLv This is not an issue (bug per se) but a limitation of the int32_t data
types used in MXNet. As I pointed to
apeforest commented on issue #15703: Storage manager / memory usage regression
in v1.5
URL:
https://github.com/apache/incubator-mxnet/issues/15703#issuecomment-523119231
@TaoLv This is not an issue (bug per se) but a limitation of the int32_t data
types used in MXNet. As I pointed to the
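The int32_t limitation referenced above appears as soon as a tensor has more than 2^31 - 1 elements; a short sketch of why 64-bit indexing is needed (illustrative shape, not MXNet code):

```python
import numpy as np

# A 50000 x 50000 tensor has 2.5e9 elements -- more than int32 can index.
n_elements = 50000 * 50000  # Python ints do not overflow
assert n_elements > np.iinfo(np.int32).max

# What an int32 index variable would actually hold: the value wraps.
wrapped = np.int64(n_elements).astype(np.int32)
assert int(wrapped) != n_elements
```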
ChaiBapchya commented on a change in pull request #15772: Add symbol api for
randn and fix shape issue for randn ndarray and symbol api
URL: https://github.com/apache/incubator-mxnet/pull/15772#discussion_r315816843
##
File path: mxnet_py3/include/python3.6m
##
@@ -0,0 +1
access2rohit commented on a change in pull request #15899: Typedef cleanup
URL: https://github.com/apache/incubator-mxnet/pull/15899#discussion_r315794983
##
File path: include/mxnet/c_predict_api.h
##
@@ -43,8 +43,6 @@ extern "C" {
/*! \brief manually define unsigned
access2rohit commented on a change in pull request #15899: Typedef cleanup
URL: https://github.com/apache/incubator-mxnet/pull/15899#discussion_r315795007
##
File path: include/mxnet/c_predict_api.h
##
@@ -43,8 +43,6 @@ extern "C" {
/*! \brief manually define unsigned
kshitij12345 commented on a change in pull request #15772: Add symbol api for
randn and fix shape issue for randn ndarray and symbol api
URL: https://github.com/apache/incubator-mxnet/pull/15772#discussion_r315790759
##
File path: mxnet_py3/include/python3.6m
##
@@ -0,0
kshitij12345 commented on issue #15474: [MXNET-978] Higher Order Gradient
Support `sqrt`, `cbrt`.
URL: https://github.com/apache/incubator-mxnet/pull/15474#issuecomment-523094624
@roywei Have retriggered and it has run successfully.
kshitij12345 commented on issue #15746: [MXNET-978] Higher Order Gradient
Support `clip`, `dropout`.
URL: https://github.com/apache/incubator-mxnet/pull/15746#issuecomment-523094418
@roywei @apeforest Have retriggered and the tests have run successfully.
leezu commented on issue #12795: Deserialization problem with gluon
`ValueError: There are multiple outputs with name ...`
URL:
https://github.com/apache/incubator-mxnet/issues/12795#issuecomment-523090915
The "Minimum reproducible example" works for me on 1.5 and current master.
access2rohit commented on a change in pull request #15948: Fix a memory
misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#discussion_r315777314
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -414,30 +414,23 @@ void
access2rohit commented on a change in pull request #15948: Fix a memory
misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#discussion_r315777529
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -414,30 +414,23 @@ void
access2rohit commented on a change in pull request #15948: Fix a memory
misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#discussion_r315777158
##
File path: 3rdparty/mshadow/mshadow/tensor.h
##
@@ -69,15 +69,15 @@ struct Shape {
ptrendx commented on issue #15545: Softmax optimization for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15545#issuecomment-523070340
Ok, it seems that splitting softmax.cc into 3 files, one for each operator
(softmax, softmin, and log_softmax), fortunately did the trick.
TaoLv commented on issue #15930: Fix dtype inference in arange_like operator
URL: https://github.com/apache/incubator-mxnet/pull/15930#issuecomment-523068244
@eric-haibin-lin do you think the code snippet below can be used as a test
case?
```python
import mxnet as mx
import
```
TaoLv commented on issue #15703: Storage manager / memory usage regression in
v1.5
URL:
https://github.com/apache/incubator-mxnet/issues/15703#issuecomment-523037374
@apeforest Thank you for the analysis. What's the blocker to get this issue
fixed on the v1.5.x branch?
ciyongch opened a new pull request #15950: [MKLDNN]Support fullyconnected and
element-wise ops fusion
URL: https://github.com/apache/incubator-mxnet/pull/15950
## Description ##
This PR is to add the support for fullyconnected and some element-wise
(including
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new c798770 Bump the publish
zoeygxy commented on issue #15942: Refines NDArray indexing and adds numpy
ndarray indexing [DO NOT MERGE YET]
URL: https://github.com/apache/incubator-mxnet/pull/15942#issuecomment-523009739
Waiting for CI result. Still fixing style.
zixuanweeei commented on issue #15741: MKL-DNN LBR-GRU Inference Integration
(FP32 LBR-GRU)
URL: https://github.com/apache/incubator-mxnet/pull/15741#issuecomment-522955864
Cherry-picked from commit 1cf63e1 according to
yzhliu commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315582881
##
File path: contrib/tvmop/basic/ufunc.py
##
@@ -48,3 +50,71 @@ def vadd_gpu(dtype, ndim):
s[C].bind(bx,
yzhliu commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315584826
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
yzhliu commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315583132
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
yzhliu commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315585709
##
File path: src/operator/contrib/tvmop/ufunc.cc
##
@@ -37,29 +38,88 @@ namespace op {
static constexpr char
yzhliu commented on a change in pull request #15938: Tvm broadcast backward
URL: https://github.com/apache/incubator-mxnet/pull/15938#discussion_r315579522
##
File path: contrib/tvmop/basic/ufunc.py
##
@@ -48,3 +50,71 @@ def vadd_gpu(dtype, ndim):
s[C].bind(bx,
leezu commented on issue #15703: Storage manager / memory usage regression in
v1.5
URL:
https://github.com/apache/incubator-mxnet/issues/15703#issuecomment-522917231
Thank you for diving deep to find the root cause! I'm not blocked by this
fix having to wait for MXNet 1.6, but we may
pengzhao-intel commented on issue #15853: Float64 fallback for mkldnn subgraph
and rnn op
URL: https://github.com/apache/incubator-mxnet/pull/15853#issuecomment-522908818
@ZhennanQin could you try CI again?
ElaineBao commented on issue #15884: [WIP] New Website: New Docs [1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884#issuecomment-522898751
Hi, is it possible to make API docs easier to find? I have to click many
times to get to the function page: Main page -> Docs & Tutorials
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 9d8c1d9 Bump the publish
samskalicky commented on issue #15921: [WIP] dynamic custom operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-522889129
@wkcn while this PR is not quite done yet, it would be great to get some
early feedback since the design/implementation has changed
gyshi opened a new pull request #15949: Numpy op exp2
URL: https://github.com/apache/incubator-mxnet/pull/15949
This adds the exp2 op, tested on the numpy branch.
(https://www.numpy.org/doc/1.17/reference/generated/numpy.exp2.html)
apeforest commented on issue #15948: Fix a memory misalignment in topk operator
URL: https://github.com/apache/incubator-mxnet/pull/15948#issuecomment-522877070
@access2rohit @ChaiBapchya Please also help review. Thanks