tianlanli commented on issue #16516: mxnet 0.10.0 compile fail
URL: https://github.com/apache/incubator-mxnet/issues/16516#issuecomment-543016458
> `make -j8 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1
USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 > build.log 2>&1`
>
> then :
>
>
reminisce commented on a change in pull request #16513: Bug fix for the input
of same axes of the swapaxes operator
URL: https://github.com/apache/incubator-mxnet/pull/16513#discussion_r335815662
##
File path: tests/python/unittest/test_operator.py
##
@@ -716,6 +716,23 @@
access2rohit commented on issue #16409: added support for large tensors for
Dropout operator and tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#issuecomment-543007851
> is there a comma missing in the title?
>
> 1. added
access2rohit edited a comment on issue #16409: added support for large tensors
for Dropout operator and tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#issuecomment-543007851
> is there a comma missing in the title?
>
> 1. added
access2rohit commented on a change in pull request #16409: added support for
large tensors in Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#discussion_r335813291
##
File path: tests/nightly/test_large_array.py
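For context on the operator these tests target: a minimal NumPy sketch of inverted-dropout semantics (the helper name `dropout` is ours for illustration, not MXNet's implementation):

```python
import numpy as np

def dropout(x, p, rng):
    # Inverted dropout: zero each element with probability p and scale
    # the survivors by 1/(1-p) so the expected value is unchanged.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
out = dropout(np.ones((4, 4)), p=0.5, rng=rng)
```

The large-tensor work is about making such operators index correctly when inputs exceed 2^31 elements; the semantics above are unchanged.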
tianlanli opened a new issue #16516: mxnet 0.10.0 compile fail
URL: https://github.com/apache/incubator-mxnet/issues/16516
`make -j8 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1
USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 > build.log 2>&1`
then :
mxnet-label-bot commented on issue #16516: mxnet 0.10.0 compile fail
URL: https://github.com/apache/incubator-mxnet/issues/16516#issuecomment-543004679
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
that the appropriate
eric-haibin-lin commented on issue #16234: [MXNET-1426] Fix the wrong result of
sum, mean, argmin, argmax when inputs contain inf or nan
URL: https://github.com/apache/incubator-mxnet/pull/16234#issuecomment-542996385
Would you mind also adding what the result is before this fix?
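To illustrate the kind of inf/nan edge case the fix targets, here is the reference behavior in NumPy (shown only as a point of comparison; the PR concerns MXNet's implementations of these reductions):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
# NumPy propagates NaN through reductions: sum and mean become nan,
# and argmax returns the index of the first NaN.
total = np.sum(a)
idx = np.argmax(a)
```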
eric-haibin-lin commented on a change in pull request #16408: Add MXNet Ops for
fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335803864
##
File path: src/operator/contrib/transformer.cc
##
@@ -29,6 +29,231 @@
namespace
eric-haibin-lin commented on a change in pull request #16408: Add MXNet Ops for
fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335803364
##
File path: src/operator/contrib/transformer.cc
##
@@ -29,6 +29,231 @@
namespace
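For readers unfamiliar with the operator being added: a minimal NumPy sketch of scaled dot-product attention, the core of multihead attention (a reference for the semantics only, not the PR's fused CPU/GPU implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d) arrays. Softmax over the key axis, with the
    # logits scaled by sqrt(d) to keep them in a stable range.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))
out = scaled_dot_product_attention(q, q, q)
```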
alphashi opened a new issue #16515: MXNet C++ interface: running the
image-classification-predict.cc example cannot exit normally
URL: https://github.com/apache/incubator-mxnet/issues/16515
When I run the MXNet C++ interface example
tuanzhangCS commented on issue #16110: ndarray treated uint8 as signed value
URL: https://github.com/apache/incubator-mxnet/issues/16110#issuecomment-542979694
And you can try this case; it returns the correct result.
```
import mxnet as mx
arr = mx.nd.ones((2, 2), dtype='uint8')
```
gyshi commented on a change in pull request #16129: [Numpy] add op linalg
norm
URL: https://github.com/apache/incubator-mxnet/pull/16129#discussion_r335793559
##
File path: python/mxnet/ndarray/numpy/linalg.py
##
@@ -18,5 +18,185 @@
"""Namespace for operators used in
tuanzhangCS commented on issue #16110: ndarray treated uint8 as signed value
URL: https://github.com/apache/incubator-mxnet/issues/16110#issuecomment-542976704
Hi @dwSun, I think it may not be a bug: the type of arr2 is uint8, so the
data type of arr2.sum() is also uint8.
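The behavior under discussion can be reproduced with plain NumPy, used here only to illustrate fixed-width unsigned-integer semantics (not MXNet's code):

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
# Reinterpreting the same byte as signed int8 gives 200 - 256 = -56,
# which is what "uint8 treated as a signed value" looks like in practice.
signed = int(a.view(np.int8)[0])
# Separately, uint8 arithmetic wraps modulo 256: 200 + 100 -> 44.
wrapped = int((np.array([200], dtype=np.uint8)
               + np.array([100], dtype=np.uint8))[0])
```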
access2rohit commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335790069
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1200,57 @@
access2rohit commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335789956
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +709,57 @@
access2rohit commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335789995
##
File path: src/operator/tensor/elemwise_sum.h
##
@@ -64,7 +64,7 @@
anirudh2290 commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335781449
##
File path: src/operator/tensor/elemwise_sum.h
##
@@ -64,7 +64,7 @@ void
anirudh2290 commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335783253
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +709,57 @@
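For context on the operator under review: `add_n` is an elementwise sum over a list of same-shape arrays, and the large-tensor changes concern the index type used when inputs exceed 2^31 elements. A minimal NumPy sketch of the semantics (the helper is ours, not the operator's implementation in elemwise_sum.h):

```python
import numpy as np

def add_n(arrays):
    # Elementwise sum of a list of same-shape arrays, mirroring the
    # semantics of MXNet's add_n operator.
    out = np.zeros_like(arrays[0])
    for a in arrays:
        out = out + a
    return out

result = add_n([np.ones((2, 2)), np.full((2, 2), 2.0)])
```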
marcoabreu commented on issue #16490: Correct Google Analytics Tracker
URL: https://github.com/apache/incubator-mxnet/pull/16490#issuecomment-542963485
Yeah, agreed; it would be good to have these settings centralized.
anirudh2290 commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335781266
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1200,57 @@
access2rohit commented on issue #16476: added large tensor support for add_n
and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#issuecomment-542961919
@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is
ready for review
access2rohit commented on issue #16371: Added large tensor support and test for
gather_nd
URL: https://github.com/apache/incubator-mxnet/pull/16371#issuecomment-542961360
@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is
ready for review
access2rohit commented on issue #16409: added support for large tensors in
Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#issuecomment-542961446
@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is
ready for
access2rohit commented on issue #15126: [MXNET-1407] Added test to verify Large
Tensor Support for pad operator
URL: https://github.com/apache/incubator-mxnet/pull/15126#issuecomment-542961141
@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is
ready for review
sojiadeshina commented on issue #16496: fix missing docs due to git add issues
URL: https://github.com/apache/incubator-mxnet/pull/16496#issuecomment-542957851
@aaronmarkham can we restart unix-gpu?
rondogency commented on issue #15921: dynamic custom operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-542954858
@szha we can also copy the implementations of dmlc::registry & dlpack into
lib_api.h, but it doesn't solve the divergence problem you described.
anirudh2290 commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335773199
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1185,118 @@ def test_full():
assert a[-1][-1] == 3
anirudh2290 commented on issue #16497: Large Vector tests for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#issuecomment-542953179
Can you also address:
https://github.com/apache/incubator-mxnet/pull/16104#discussion_r333275164
rondogency commented on issue #15921: dynamic custom operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-542951984
@szha Because we want to make lib_api.h self-contained without any
dependencies. The end goal is that the user will only copy lib_api.h to
zixuanweeei commented on issue #16503: [mkldnn-v1.0] Enable mkldnn cpp-test,
copy op, concat op
URL: https://github.com/apache/incubator-mxnet/pull/16503#issuecomment-542950070
> @zixuanweeei please retrigger CI again; it seems the community resolved part
of the CI issues.
Sure. The CI
eric-haibin-lin commented on a change in pull request #15921: dynamic custom
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r335769973
##
File path: example/extensions/lib_custom_op/gemm_lib.cc
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed
pengzhao-intel commented on issue #16503: [mkldnn-v1.0] Enable mkldnn cpp-test,
copy op, concat op
URL: https://github.com/apache/incubator-mxnet/pull/16503#issuecomment-542948917
@zixuanweeei please retrigger CI again; it seems the community resolved part
of the CI issues.
pengzhao-intel commented on issue #16470: [mkldnn-1.0] add skipped case for
mkldnn_v1.0
URL: https://github.com/apache/incubator-mxnet/pull/16470#issuecomment-542948715
@rongzha1 please take a look at CI again; it seems the community resolved part
of the CI issues.
rondogency commented on a change in pull request #15921: dynamic custom
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r335767549
##
File path: example/extensions/lib_custom_op/gemm_lib.cc
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new d539955 Bump the
anirudh2290 commented on a change in pull request #16497: Large Vector tests
for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#discussion_r335765602
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,111 @@ def test_full():
access2rohit commented on issue #16477: added more tests to verify support for
large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#issuecomment-542944450
test_large_vector.test_slice ... ok
test_large_vector.test_ndarray_zeros ... ok
access2rohit commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335764636
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1185,118 @@ def test_full():
assert a[-1][-1] == 3
access2rohit commented on a change in pull request #16497: Large Vector tests
for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#discussion_r335764541
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,111 @@ def test_full():
aaronmarkham opened a new pull request #16514: add binary and docs build
command options to devmenu
URL: https://github.com/apache/incubator-mxnet/pull/16514
## Description ##
This PR will make it easier for users/contributors to build MXNet from
source and then build any documentation
xiezhq-hermann edited a comment on issue #16513: Bug fix for the input of same
axes of the swapaxes operator
URL: https://github.com/apache/incubator-mxnet/pull/16513#issuecomment-542942167
> Thanks for your contribution!
> Could you please add a testcase in
>
>
xiezhq-hermann opened a new pull request #16513: Bug fix for the input of same
axes of the swapaxes operator
URL: https://github.com/apache/incubator-mxnet/pull/16513
## Description ##
Bug fix for the input of same axes of the swapaxes operator
## Checklist ##
### Essentials
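The fixed case is easy to state: swapping an axis with itself should be a no-op. NumPy is shown here only as reference semantics for what the corrected MXNet operator should do:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# Same-axes input: swapaxes(a, 1, 1) must return an array equal to a.
b = np.swapaxes(a, 1, 1)
```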
xiezhq-hermann closed pull request #16513: Bug fix for the input of same axes
of the swapaxes operator
URL: https://github.com/apache/incubator-mxnet/pull/16513
This is an automated message from the Apache Git Service.
xiezhq-hermann commented on issue #16513: Bug fix for the input of same axes of
the swapaxes operator
URL: https://github.com/apache/incubator-mxnet/pull/16513#issuecomment-542942167
> Thanks for your contribution!
> Could you please add a testcase in
>
>
anirudh2290 commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335761376
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1185,118 @@ def test_full():
assert a[-1][-1] == 3
anirudh2290 commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335761461
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1185,118 @@ def test_full():
assert a[-1][-1] == 3
anirudh2290 commented on a change in pull request #16497: Large Vector tests
for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#discussion_r335760962
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,111 @@ def test_full():
anirudh2290 commented on a change in pull request #16497: Large Vector tests
for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#discussion_r335761187
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,111 @@ def test_full():
anirudh2290 commented on a change in pull request #16497: Large Vector tests
for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#discussion_r335761248
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,111 @@ def test_full():
anirudh2290 commented on a change in pull request #16409: added support for
large tensors in Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#discussion_r335759716
##
File path: tests/nightly/test_large_array.py
anirudh2290 commented on a change in pull request #16409: added support for
large tensors in Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#discussion_r335760414
##
File path: tests/nightly/test_large_array.py
wkcn commented on issue #15921: dynamic custom operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-542937840
Hi @szha, this PR provides an approach to creating ABI-compatible custom
operators.
There are some limitations in the existing interface
access2rohit commented on issue #16477: added more tests to verify support for
large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#issuecomment-542937444
```
test_large_vector.test_rounding_ops ... ok
test_large_vector.test_trigonometric_ops ... ok
```
access2rohit commented on issue #16409: added support for large tensors in
Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#issuecomment-542936742
test_large_array.test_rounding_ops ... ok
access2rohit commented on a change in pull request #16477: added more tests to
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335750464
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,174 @@ def
john-andrilla commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335749991
##
File path: docs/python_docs/python/tutorials/deploy/run-on-aws/cloud.rst
##
@@ -26,80 +26,8 @@ learning
aaronmarkham opened a new pull request #16512: detect number of procs during
sphinx build
URL: https://github.com/apache/incubator-mxnet/pull/16512
## Description ##
I ran into a build issue when making the python docs on an instance with 16
processors. The makefile for sphinx was
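A sketch of the idea behind the fix: derive the parallel-job count from the machine instead of hard-coding it. This is an assumption about the approach, not the PR's exact change (which lives in the sphinx makefile):

```python
import os

# Detect the processor count at build time rather than hard-coding -j.
nprocs = os.cpu_count() or 1
cmd = f"sphinx-build -j {nprocs} source build"
```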
john-andrilla commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335746700
##
File path: docs/python_docs/python/tutorials/deploy/export/onnx.md
##
@@ -28,7 +28,7 @@ In this tutorial, we
john-andrilla commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335746233
##
File path: docs/python_docs/python/tutorials/deploy/export/onnx.md
##
@@ -28,7 +28,7 @@ In this tutorial, we
anirudh2290 commented on a change in pull request #16477: added more tests to
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335743889
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,174 @@ def
john-andrilla commented on a change in pull request #16504: [DOC] Fix numpy op
doc
URL: https://github.com/apache/incubator-mxnet/pull/16504#discussion_r335729934
##
File path: python/mxnet/_numpy_op_doc.py
##
@@ -748,3 +748,72 @@ def _np_moveaxis(a, source,
ChaiBapchya commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335687206
##
File path: tests/nightly/test_large_array.py
##
@@ -1199,6 +1200,57 @@
ChaiBapchya commented on a change in pull request #16476: added large tensor
support for add_n and tests for more ops
URL: https://github.com/apache/incubator-mxnet/pull/16476#discussion_r335686620
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +709,57 @@
ChaiBapchya commented on issue #16497: Large Vector tests for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#issuecomment-542872055
@access2rohit @anirudh2290 @zheng-da PR ready for review
ChaiBapchya edited a comment on issue #16497: Large Vector tests for DGL Ops
Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497#issuecomment-542556096
```
test_large_array.test_gluon_embedding ... ok
test_large_array.test_ndarray_zeros ... ok
```
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335668894
##
File path: docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
##
@@ -19,8 +19,8 @@
Loss functions
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335666384
##
File path: docs/python_docs/python/tutorials/packages/gluon/blocks/hybridize.md
##
@@ -294,7 +294,7 @@ def
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335665652
##
File path: docs/python_docs/python/tutorials/packages/gluon/blocks/hybridize.md
##
@@ -277,10 +277,10 @@ def
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335664581
##
File path: docs/python_docs/python/tutorials/packages/autograd/index.md
##
@@ -159,7 +159,7 @@
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335607986
##
File path: docs/python_docs/python/tutorials/packages/gluon/blocks/hybridize.md
##
@@ -294,7 +294,7 @@ def
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335607069
##
File path: docs/python_docs/python/tutorials/packages/gluon/blocks/hybridize.md
##
@@ -277,10 +277,10 @@ def
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335618722
##
File path: docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
##
@@ -19,8 +19,8 @@
Loss functions
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335588156
##
File path: docs/python_docs/python/tutorials/deploy/run-on-aws/index.rst
##
@@ -42,7 +42,7 @@ The following
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335619227
##
File path: docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
##
@@ -19,8 +19,8 @@
Loss functions
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335616730
##
File path: docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
##
@@ -19,8 +19,8 @@
Loss functions
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335597022
##
File path: docs/python_docs/python/tutorials/packages/autograd/index.md
##
@@ -196,7 +196,7 @@ print(x.grad)
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335593746
##
File path: docs/python_docs/python/tutorials/packages/autograd/index.md
##
@@ -159,7 +159,7 @@
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335613010
##
File path: docs/python_docs/python/tutorials/packages/gluon/blocks/nn.md
##
@@ -298,16 +298,16 @@ After all, we
TEChopra1000 commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335621646
##
File path:
docs/python_docs/python/tutorials/packages/ndarray/gotchas_numpy_in_mxnet.md
##
@@ -22,29 +22,29 @@
access2rohit commented on issue #14263: MXNet Master build for CUDA with
DEBUG=1 failing
URL: https://github.com/apache/incubator-mxnet/issues/14263#issuecomment-542811705
@hzfan let me try that. Thanks for the suggestion!
access2rohit commented on a change in pull request #16409: added support for
large tensors in Dropout tests to verify support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#discussion_r335612343
##
File path: tests/nightly/test_large_array.py
ptrendx commented on issue #16398: Aggregated adamw update
URL: https://github.com/apache/incubator-mxnet/pull/16398#issuecomment-542786721
Generally I would opt for cleaning up the optimizers so that only the `multi_`
versions of the optimizer code exist (so the same code is used for both multi
610v4nn1 opened a new issue #16511: Returned value incorrectly described in the
documentation
URL: https://github.com/apache/incubator-mxnet/issues/16511
https://github.com/apache/incubator-mxnet/blob/c2bbde75c36be2fef4a5e6d941cc01f5aabd9fd2/src/operator/tensor/ordering_op.cc#L38
mxnet-label-bot commented on issue #16511: Returned value incorrectly described
in the documentation
URL: https://github.com/apache/incubator-mxnet/issues/16511#issuecomment-542781776
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some
aaronmarkham commented on issue #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#issuecomment-542765096
@IvyBazan @TEChopra1000 - can you review?
vsuhasm edited a comment on issue #13054: Multiple trainers within a single
worker using a distributed KVStore
URL: https://github.com/apache/incubator-mxnet/issues/13054#issuecomment-541481202
Hi @szha @frankfliu, is there any way to get around this? I have 2 sets of
parameters in the
aaronmarkham commented on issue #16450: Add test pipeline for USE_TVM_OP=OFF on
Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#issuecomment-542722599
> dist-kvstore tests GPU failure has shown up in quite a few PRs recently.
Wonder what the root cause is.
I noticed
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335489138
##
File path:
docs/python_docs/python/tutorials/packages/onnx/inference_on_onnx_model.md
##
@@ -29,7 +29,7 @@ In
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335488905
##
File path: docs/python_docs/python/tutorials/packages/onnx/fine_tuning_gluon.md
##
@@ -31,7 +31,7 @@ In this
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335488509
##
File path: docs/python_docs/python/tutorials/packages/onnx/super_resolution.md
##
@@ -24,7 +24,7 @@ In this
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335488231
##
File path: docs/static_site/src/pages/api/faq/distributed_training.md
##
@@ -91,7 +91,7 @@ In the case of
aaronmarkham commented on a change in pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500#discussion_r335487459
##
File path: docs/python_docs/python/tutorials/deploy/export/onnx.md
##
@@ -28,7 +28,7 @@ In this tutorial, we
aaronmarkham opened a new issue #16510: control flow tutorial is missing
URL: https://github.com/apache/incubator-mxnet/issues/16510
There are several references to it but it isn't in the docs anymore.