apeforest commented on a change in pull request #17235: [DOC] Add a few tips
for running horovod
URL: https://github.com/apache/incubator-mxnet/pull/17235#discussion_r363619993
##
File path: example/distributed_training-horovod/README.md
##
@@ -199,3 +199,11 @@ $ mpirun
apeforest commented on a change in pull request #17235: [DOC] Add a few tips
for running horovod
URL: https://github.com/apache/incubator-mxnet/pull/17235#discussion_r363619079
##
File path: example/distributed_training-horovod/README.md
##
@@ -199,3 +199,11 @@ $ mpirun
liuzh91 closed issue #17086: [MKLDNN] RNN Op gradient computation is broken
URL: https://github.com/apache/incubator-mxnet/issues/17086
This is an automated message from the Apache Git Service.
To respond to the message,
eric-haibin-lin opened a new pull request #17235: [DOC] Add a few tips for
running horovod
URL: https://github.com/apache/incubator-mxnet/pull/17235
## Description ##
Add some docs. @apeforest @muhyun
## Checklist ##
### Essentials ###
Please feel free to remove
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new c6aa963 Bump the
gengyanlei edited a comment on issue #17224: Some suggestions on mxnet.gluon (usage)
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571453654
@wkcn Mainly, after the data has been distributed, the prediction results still need to be iterated over and processed, as in the code above, which is inconvenient for follow-up operations such as accuracy calculation.
yajiedesign commented on issue #17218: mxnet.ndarray.from_numpy() throws error
for float16 dtype
URL:
https://github.com/apache/incubator-mxnet/issues/17218#issuecomment-571454004
@wkcn Hi, the old Windows build system is now broken. We will move the
nightly release to S3, which is
gengyanlei commented on issue #17224: Some suggestions on mxnet.gluon (usage)
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571453654
@wkcn Mainly, after the data has been distributed, the prediction results still need to be iterated over and processed, as in the code above, which is inconvenient for follow-up operations such as accuracy calculation.
Tommliu opened a new pull request #17234: Op Quantile/Percentile [Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234
## Description ##
Numpy operators Quantile and Percentile. #16896
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable
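For reference, these operators mirror NumPy's semantics; a quick illustration with NumPy itself:

```python
import numpy as np

a = np.arange(10)           # [0, 1, ..., 9]
q = np.quantile(a, 0.5)     # median of the array
p = np.percentile(a, 50)    # same value; percentile is quantile with q scaled by 100
print(q, p)                 # → 4.5 4.5
```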
leezu commented on issue #17098: Disable OpenMP offloading support for
3rdparty/openmp
URL: https://github.com/apache/incubator-mxnet/pull/17098#issuecomment-571444598
The reason for using a tmpdir is that multiple jobs are executed on the same
Windows CI server. In an earlier stage of
xidulu opened a new pull request #17233: [Numpy] Add broadcast_to scalar case
URL: https://github.com/apache/incubator-mxnet/pull/17233
## Description ##
As title
https://github.com/apache/incubator-mxnet/issues/17223
## Checklist ##
### Essentials ###
Please feel free to
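The scalar case being added follows NumPy's `broadcast_to`, which accepts a plain Python scalar and produces a read-only view of the requested shape:

```python
import numpy as np

out = np.broadcast_to(1.0, (2, 3))  # scalar broadcast to shape (2, 3)
print(out.shape)  # → (2, 3)
```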
leezu commented on issue #17098: Disable OpenMP offloading support for
3rdparty/openmp
URL: https://github.com/apache/incubator-mxnet/pull/17098#issuecomment-571441803
@samskalicky above lines are a temporary workaround until the Base Windows
AMI used by CI is updated. @larroy may be able
samskalicky edited a comment on issue #17098: Disable OpenMP offloading support
for 3rdparty/openmp
URL: https://github.com/apache/incubator-mxnet/pull/17098#issuecomment-571439534
@leezu I'm getting this error in my windows CI runs:
```
Traceback (most recent call last):
File
samskalicky commented on issue #17098: Disable OpenMP offloading support for
3rdparty/openmp
URL: https://github.com/apache/incubator-mxnet/pull/17098#issuecomment-571439534
@leezu I'm getting this error in my windows CI runs:
```
Traceback (most recent call last):
File
pribadihcr commented on issue #15646: TensorRT protobuf sizelimit and subgraph
error
URL:
https://github.com/apache/incubator-mxnet/issues/15646#issuecomment-571418418
+1
tonyng1707 opened a new issue #17232: How to build conda package on mxnet-cu101
(version 1.5.1)
URL: https://github.com/apache/incubator-mxnet/issues/17232
I used 'conda skeleton pypi' to build a Conda package for mxnet-cu101; however,
the exception 'could not find unpacked source dir' occurred. It
ciyongch commented on issue #17217: fix latency calculation and print issue
URL: https://github.com/apache/incubator-mxnet/pull/17217#issuecomment-571410634
> How about changing `0 instance use 0-27 cores and 0 mem with
BENCHMARK_0.log` to `Instance 0 uses core 0-27 and mem 0:
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363558616
##
File path: example/extensions/lib_subgraph/test_subgraph.py
##
@@ -39,22 +39,38 @@
b = mx.sym.var('b')
wkcn commented on issue #17224: Can gluon's data distribution be combined into one step, like PyTorch's? Can .step() avoid requiring the batch size, and can loss functions take a reduce option? It's a bit inconvenient to use, though much better than symbol, which makes it hard to record loss
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571399148
@gengyanlei
1. Data distribution
There doesn't seem to be an API similar to `nn.DataParallel` yet; you need to wrap it yourself.
TaoLv commented on issue #17217: fix latency calculation and print issue
URL: https://github.com/apache/incubator-mxnet/pull/17217#issuecomment-571398199
How about changing `0 instance use 0-27 cores and 0 mem with
BENCHMARK_0.log` to `Instance 0 uses core 0-27 and mem 0: BENCHMARK_0.log`?
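The proposed wording can be sketched as a small formatting helper (hypothetical function name, for illustration only):

```python
def format_instance_log(instance_id, core_range, mem_node, log_file):
    # Refined wording proposed in the review:
    # "Instance 0 uses core 0-27 and mem 0: BENCHMARK_0.log"
    return f"Instance {instance_id} uses core {core_range} and mem {mem_node}: {log_file}"

print(format_instance_log(0, "0-27", 0, "BENCHMARK_0.log"))
# → Instance 0 uses core 0-27 and mem 0: BENCHMARK_0.log
```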
pengzhao-intel commented on issue #17224: Can gluon's data distribution be combined into one step, like PyTorch's? Can .step() avoid requiring the batch size, and can loss functions take a reduce option? It's a bit inconvenient to use, though much better than symbol, which makes it hard to record loss
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571397502
@gengyanlei Could you shorten the title a bit? :)
samskalicky commented on issue #17034: Dynamic subgraph property
URL: https://github.com/apache/incubator-mxnet/pull/17034#issuecomment-571397143
Thanks for the review @eric-haibin-lin! I've made changes based on your
feedback and updated the PR description with "Next Steps" for some todo
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363562382
##
File path: src/c_api/c_api.cc
##
@@ -705,6 +719,57 @@ int MXLoadLib(const char *path) {
}
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363562140
##
File path: src/operator/subgraph/partitioner/custom_subgraph_property.h
##
@@ -181,18 +186,28 @@ class
taolv pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new ad1ff3a [v1.6.x] Cherry-pick MKL-DNN
gengyanlei commented on issue #17224: Can gluon's data distribution be combined into one step, like PyTorch's? Can .step() avoid requiring the batch size, and can loss functions take a reduce option? It's a bit inconvenient to use, though much better than symbol, which makes it hard to record loss
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571396357
TaoLv merged pull request #17225: [v1.6.x] Cherry-pick MKL-DNN Rnn operator
enhancements to v1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/17225
wkcn commented on a change in pull request #17230: Additional fix for vector
access.
URL: https://github.com/apache/incubator-mxnet/pull/17230#discussion_r363559546
##
File path: 3rdparty/mshadow/mshadow/dot_engine-inl.h
##
@@ -421,12 +421,9 @@ struct BLASEngine {
wuxun-zhang commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571396002
@zhhoper From my side, I cannot reproduce this issue with the latest master on
my local machine. May I know which mxnet version (commit id) you
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363557990
##
File path: example/extensions/lib_subgraph/test_subgraph.py
##
@@ -39,22 +39,38 @@
b = mx.sym.var('b')
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363561106
##
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##
@@ -0,0 +1,250 @@
+/*
+ * Licensed to the
gengyanlei commented on issue #17224: Can gluon's data distribution be combined into one step, like PyTorch's? Can .step() avoid requiring the batch size, and can loss functions take a reduce option? It's a bit inconvenient to use, though much better than symbol, which makes it hard to record loss
URL:
https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571394479
mxnet
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363558404
##
File path: include/mxnet/lib_api.h
##
@@ -590,6 +592,52 @@ class CustomOp {
bool isSGop;
};
+/*!
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363558320
##
File path: example/extensions/lib_subgraph/Makefile
##
@@ -0,0 +1,24 @@
+# Licensed to the Apache
samskalicky commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363558190
##
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##
@@ -0,0 +1,250 @@
+/*
+ * Licensed to the
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 0540e33 Bump the
ciyongch commented on issue #17217: fix latency calculation and print issue
URL: https://github.com/apache/incubator-mxnet/pull/17217#issuecomment-571378516
> @ciyongch Could you please paste the refined log message here?
The updated message looks like this:
```
target machine
ptrendx closed pull request #15044: [WIP] Added fused slice sum operator
URL: https://github.com/apache/incubator-mxnet/pull/15044
ptrendx commented on issue #15044: [WIP] Added fused slice sum operator
URL: https://github.com/apache/incubator-mxnet/pull/15044#issuecomment-571377763
This PR is no longer needed due to pointwise fusion.
ptrendx commented on issue #15054: [WIP] Adds a fused split, bias, activation
and multiplication operator
URL: https://github.com/apache/incubator-mxnet/pull/15054#issuecomment-571377572
This PR is no longer needed due to pointwise fusion support.
ptrendx closed pull request #15054: [WIP] Adds a fused split, bias, activation
and multiplication operator
URL: https://github.com/apache/incubator-mxnet/pull/15054
anirudh2290 commented on issue #16654: Multithreaded Inference Support
URL: https://github.com/apache/incubator-mxnet/pull/16654#issuecomment-571374844
@zachgk @larroy Thanks for the review. I have addressed it.
ZhennanQin commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571374482
@zhhoper Thanks for the information. Will investigate this soon.
zhhoper commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571373823
@ZhennanQin I tried setting calib-mode to 'naive' and met the same error. The
error message is as follows:
INFO:logger:Namespace(batch_size=32,
ZhennanQin commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571372981
@eric-haibin-lin Thanks for reporting this. May I know if calibration=naive
will crash or not?
eric-haibin-lin commented on issue #17231: cannot quantization example
URL:
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571372115
@PatricZhao @TaoLv @ZhennanQin
eric-haibin-lin commented on issue #17220: Cannot access child blocks'
parameter in parent block's hybrid_forward
URL:
https://github.com/apache/incubator-mxnet/issues/17220#issuecomment-571371930
Shall we also consider the case where input is a symbol so the `ctx` is not
visible to
zhhoper opened a new issue #17231: cannot quantization example
URL: https://github.com/apache/incubator-mxnet/issues/17231
## Description
I tried to run the quantization example (python imagenet_gen_qsym_mkldnn.py) and met the
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363532852
##
File path: src/c_api/c_api.cc
##
@@ -705,6 +719,57 @@ int MXLoadLib(const char *path) {
}
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363531177
##
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##
@@ -0,0 +1,250 @@
+/*
+ * Licensed to
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363532165
##
File path: example/extensions/lib_subgraph/test_subgraph.py
##
@@ -39,22 +39,38 @@
b =
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363537769
##
File path: example/extensions/lib_subgraph/Makefile
##
@@ -0,0 +1,24 @@
+# Licensed to the Apache
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363534787
##
File path: include/mxnet/lib_api.h
##
@@ -590,6 +592,52 @@ class CustomOp {
bool isSGop;
};
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363539157
##
File path: example/extensions/lib_subgraph/test_subgraph.py
##
@@ -39,22 +39,38 @@
b =
eric-haibin-lin commented on a change in pull request #17034: Dynamic subgraph
property
URL: https://github.com/apache/incubator-mxnet/pull/17034#discussion_r363538759
##
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##
@@ -0,0 +1,250 @@
+/*
+ * Licensed to
OliverColeman commented on issue #17218: mxnet.ndarray.from_numpy() throws
error for float16 dtype
URL:
https://github.com/apache/incubator-mxnet/issues/17218#issuecomment-571365444
That got it, thanks :)
aws-taylor opened a new pull request #17230: Additional fix for vector access.
URL: https://github.com/apache/incubator-mxnet/pull/17230
See
https://github.com/apache/incubator-mxnet/commit/9634786f96388004f68c223d72e120ad425c2f12
for the original.
## Description ##
Fixes an
anirudh2290 commented on a change in pull request #16654: Multithreaded
Inference Support
URL: https://github.com/apache/incubator-mxnet/pull/16654#discussion_r363527518
##
File path: ci/docker/runtime_functions.sh
##
@@ -1353,6 +1373,19 @@
haojin2 commented on issue #17208: gather_nd: check bound and wrap negative
indices
URL: https://github.com/apache/incubator-mxnet/pull/17208#issuecomment-571351466
Maybe also add some tests to verify this new checking.
haojin2 commented on a change in pull request #17208: gather_nd: check bound
and wrap negative indices
URL: https://github.com/apache/incubator-mxnet/pull/17208#discussion_r363521450
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -437,6 +436,98 @@ inline void
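The bound-check-and-wrap behavior this PR adds to gather_nd can be sketched in pure Python (hypothetical helper, not the actual kernel code):

```python
def wrap_index(idx, dim_size):
    # Negative indices wrap around (idx + dim_size); anything still
    # outside [-dim_size, dim_size) is rejected as out of bounds.
    if idx < -dim_size or idx >= dim_size:
        raise IndexError(f"index {idx} is out of bounds for axis of size {dim_size}")
    return idx + dim_size if idx < 0 else idx

print(wrap_index(-1, 5))  # → 4
```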
leezu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363514571
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
yzhliu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363509581
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 1612533 refactor gluon.utils.split_data() following np.array_split()
(#17123)
add bd7eedf Fix
ptrendx closed issue #17164: net.Cast("float16") doesn't work: Check failed:
(*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform type.
Expected 'float32' v.s. given 'float16' at 'gamma'
URL: https://github.com/apache/incubator-mxnet/issues/17164
ptrendx merged pull request #17212: Fix #17164 symbolblock with BatchNorm
inside during cast to fp16
URL: https://github.com/apache/incubator-mxnet/pull/17212
haojin2 commented on issue #16914: [Numpy] Implement atleast_1d, atleast_2d,
atleast_3d
URL: https://github.com/apache/incubator-mxnet/pull/16914#issuecomment-571292703
#17099 merged, closing this one.
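For reference, the NumPy semantics these operators follow: each function promotes its input to at least the named number of dimensions, leaving higher-dimensional inputs unchanged.

```python
import numpy as np

print(np.atleast_1d(5).shape)          # → (1,)
print(np.atleast_2d([1, 2, 3]).shape)  # → (1, 3)
print(np.atleast_3d([1, 2, 3]).shape)  # → (1, 3, 1)
```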
haojin2 closed pull request #16914: [Numpy] Implement atleast_1d, atleast_2d,
atleast_3d
URL: https://github.com/apache/incubator-mxnet/pull/16914
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 83a23b0 MKL-DNN RNN backward path enhancement (#17183)
add 1612533 refactor gluon.utils.split_data()
haojin2 merged pull request #17123: refactor gluon.utils.split_data() following
np.array_split()
URL: https://github.com/apache/incubator-mxnet/pull/17123
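For context, `np.array_split` (which the refactor follows) allows splits that don't divide evenly; its chunking rule can be sketched in pure Python (hypothetical helper name, for illustration):

```python
def split_uneven(data, num_slice):
    # Like numpy.array_split: the first (len % num_slice) chunks get
    # one extra element, so the length need not divide evenly.
    size, extra = divmod(len(data), num_slice)
    chunks, start = [], 0
    for i in range(num_slice):
        step = size + (1 if i < extra else 0)
        chunks.append(data[start:start + step])
        start += step
    return chunks

print(split_uneven(list(range(10)), 3))
# → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```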
haojin2 closed issue #17117: Problem with gluon.utils.split_data()
URL: https://github.com/apache/incubator-mxnet/issues/17117
leezu opened a new pull request #17229: Fix and clean up Ubuntu build from
source instructions
URL: https://github.com/apache/incubator-mxnet/pull/17229
## Description ##
Fix and clean up Ubuntu build from source instructions.
Fixes
reminisce commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363446841
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
leezu commented on issue #17103: R installation instructions out of date
URL:
https://github.com/apache/incubator-mxnet/issues/17103#issuecomment-571275838
Would you mind taking a look at
https://github.com/apache/incubator-mxnet/pull/17228 ? It should fix your
issue, but I'm not an R
leezu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363443020
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
leezu opened a new pull request #17228: Support R-package with cmake build and
fix installation instructions
URL: https://github.com/apache/incubator-mxnet/pull/17228
## Description ##
Support R-package with cmake build and fix installation instructions
Also: Fixes
yzhliu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363440181
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
rondogency commented on a change in pull request #17204: Enhancements for
MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17204#discussion_r363440140
##
File path: include/mxnet/lib_api.h
##
@@ -277,6 +289,14 @@ struct MXTensor {
reminisce commented on a change in pull request #17208: gather_nd: check bound
and wrap negative indices
URL: https://github.com/apache/incubator-mxnet/pull/17208#discussion_r363435292
##
File path: src/operator/tensor/indexing_op.cc
##
@@ -435,6 +435,63 @@ inline void
reminisce commented on a change in pull request #17208: gather_nd: check bound
and wrap negative indices
URL: https://github.com/apache/incubator-mxnet/pull/17208#discussion_r363435410
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -437,6 +436,98 @@ inline void
TEChopra1000 opened a new pull request #17227: autograd video and image link
fixes and removing symbol tutorials
URL: https://github.com/apache/incubator-mxnet/pull/17227
* Fixing autograd image and video links
* Removed references to symbol tutorials; they have been deprecated.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 324b9dd Bump the
leezu opened a new issue #17226: Compile from Source instructions re-use CI
install scripts
URL: https://github.com/apache/incubator-mxnet/issues/17226
## Description
The "compile from source" instructions re-use CI install scripts. But the CI
scripts contain hacky modifications that
leezu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363406763
##
File path: contrib/tvmop/compile.py
##
@@ -105,18 +105,13 @@ def get_cuda_arch(arch):
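Multi-arch CUDA compilation amounts to emitting one `-gencode` entry per target architecture for NVCC; a minimal sketch (hypothetical helper, not the code in `compile.py`):

```python
def gencode_flags(archs):
    # One -gencode entry per target architecture,
    # e.g. "70" -> "-gencode arch=compute_70,code=sm_70".
    return " ".join(
        f"-gencode arch=compute_{a},code=sm_{a}" for a in archs
    )

print(gencode_flags(["70", "75"]))
# → -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75
```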
eric-haibin-lin commented on a change in pull request #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#discussion_r363403135
##
File path: python/mxnet/optimizer/optimizer.py
##
@@ -1259,53 +1260,121 @@ def __init__(self,
eric-haibin-lin commented on a change in pull request #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#discussion_r363403751
##
File path: src/operator/contrib/multi_lamb.cc
##
@@ -0,0 +1,246 @@
+/*
+ * Licensed to the Apache Software
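The distinguishing piece of the LAMB optimizer is its layer-wise trust ratio; a minimal pure-Python sketch of that scaling rule (illustrative only, not the multi-tensor kernel):

```python
import math

def lamb_trust_ratio(weight, update):
    # LAMB scales each layer's step by ||w|| / ||update||,
    # falling back to 1.0 when either norm is zero.
    w_norm = math.sqrt(sum(x * x for x in weight))
    u_norm = math.sqrt(sum(x * x for x in update))
    return w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

print(lamb_trust_ratio([3.0, 4.0], [0.6, 0.8]))  # → 5.0
```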
schliffen commented on issue #12861: RuntimeError: Cannot find the MXNet
library.
URL:
https://github.com/apache/incubator-mxnet/issues/12861#issuecomment-571234345
I encountered the same problem. Has anyone solved it?
Alicia1529 commented on a change in pull request #17222: fix flaky test:
boolean index and fix bugs
URL: https://github.com/apache/incubator-mxnet/pull/17222#discussion_r363392042
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -1296,10 +1296,9 @@ def
Kh4L commented on a change in pull request #17222: fix flaky test: boolean
index and fix bugs
URL: https://github.com/apache/incubator-mxnet/pull/17222#discussion_r363379075
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -1296,10 +1296,9 @@ def
yzhliu commented on a change in pull request #17214: [tvmop] support cuda
multi-arch compilation
URL: https://github.com/apache/incubator-mxnet/pull/17214#discussion_r363364077
##
File path: CMakeLists.txt
##
@@ -758,11 +759,15 @@ if(USE_DIST_KVSTORE)
list(APPEND
zixuanweeei opened a new pull request #17225: [v1.6.x] Cherry-pick MKL-DNN Rnn
operator enhancements to v1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/17225
## Description ##
Current v1.6.x branch contains some bugs in MKL-DNN Rnn operators that will
cause gradients
Kh4L commented on issue #16881: Add TypeFlag=>string macro
URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-571167217
@haojin2 that's right; as @wkcn noted, I moved it to mshadow for consistency,
where most of the dtype routines are defined.
TaoLv commented on issue #17217: fix latency calculation and print issue
URL: https://github.com/apache/incubator-mxnet/pull/17217#issuecomment-571165743
And please re-trigger CI :(
TaoLv commented on issue #17217: fix latency calculation and print issue
URL: https://github.com/apache/incubator-mxnet/pull/17217#issuecomment-571165574
@ciyongch Could you please paste the refined log message here?