cjolivier01 removed a comment on issue #11417: libomp.so dependency (need REAL
fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554603619
I don’t see any indication that it’s related to that issue, which is the
case for most of the accusations — “any problem?
cjolivier01 removed a comment on issue #11417: libomp.so dependency (need REAL
fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554603163
Your opinion is noted again.
marcoabreu commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554611263
In accordance with the PPMC decision, I have cleaned up this conversation.
larroy removed a comment on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554503289
It has been discussed again and again; the best solution is to remove the
additional OpenMP libraries, which break things when mixed
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b820ba2 Bump the
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 4a27b5c [Fix] Add ctx to the original ndarray and revise the usage of
context to ctx (#16819)
add
reminisce merged pull request #16827: Refactor NumPy-compatible elemwise
broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827
hgt312 commented on a change in pull request #16829: [Numpy][Operator] 'where'
Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347077419
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ * Licensed
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 35839bc Make mrcnn_mask_target arg mask_size a 2d tuple (#16567)
add bc6fc14 fixing base lamb
DickJC123 opened a new pull request #16838: USE_NVRTC -> ENABLE_CUDA_RTC to fix
maven build. Add compile-guard to fusion.
URL: https://github.com/apache/incubator-mxnet/pull/16838
This PR fixes the maven build issue
https://github.com/apache/incubator-mxnet/issues/16765 reported by
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554603619
I don’t see any indication that it’s related to that issue, which is the
case for most of the accusations — “any problem? blame
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554603163
Your opinion is noted again.
wuxun-zhang commented on issue #16833: [Numpy] expands_dims cannot suport scalar
URL:
https://github.com/apache/incubator-mxnet/issues/16833#issuecomment-554602407
@stu1130 PR#16837 is filed. Please help check if it fixes this issue. Thanks.
wuxun-zhang opened a new pull request #16837: [MKLDNN] Fix expand_dims fall back
URL: https://github.com/apache/incubator-mxnet/pull/16837
## Description ##
Should fix [16833](https://github.com/apache/incubator-mxnet/issues/16833)
@pengzhao-intel @ZhennanQin @TaoLv @reminisce
haojin2 commented on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554601316
First, I don't think you're doing it the recommended way:
wuxun-zhang commented on issue #16833: [Numpy] expands_dims cannot suport scalar
URL:
https://github.com/apache/incubator-mxnet/issues/16833#issuecomment-554600286
@stu1130 @reminisce Thanks for reporting this issue. I can reproduce this
on my side. I just figured out the root cause,
haojin2 commented on issue #15229:
test_tensorrt_resnet18.test_tensorrt_resnet18_feature_vect numerical error on
T4 gpu
URL:
https://github.com/apache/incubator-mxnet/issues/15229#issuecomment-55459
Also here:
ptrendx opened a new pull request #16836: Fix InferAttr/InferShapeAttr not
calling inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836
## Description ##
Currently type/shape inference of the operator is only called if the values
of input and
larroy opened a new pull request #16835: Fix test_gluon.py:test_sync_batchnorm
when number of GPUS > 4
URL: https://github.com/apache/incubator-mxnet/pull/16835
Port of https://github.com/apache/incubator-mxnet/pull/16834
larroy opened a new pull request #16834: Fix test_gluon.py:test_sync_batchnorm
when number of GPUS > 4
URL: https://github.com/apache/incubator-mxnet/pull/16834
## Description ##
This test assumes the number of batches can be divided by the number of
GPUs, and it generates
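The divisibility assumption described above can be sketched as follows. This is an illustrative helper, not MXNet's API; `choose_num_devices` and the cap of four devices are hypothetical:

```python
# Hypothetical sketch: sync batchnorm splits each batch across devices,
# so the per-device sample count must come out even. Pick a device count
# that divides the batch size, capped at max_devices.
def choose_num_devices(batch_size, available_gpus, max_devices=4):
    """Return the largest device count <= max_devices that divides batch_size."""
    for n in range(min(available_gpus, max_devices), 0, -1):
        if batch_size % n == 0:
            return n
    return 1

print(choose_num_devices(8, 8))  # 4: capped at 4, and 8 % 4 == 0
print(choose_num_devices(6, 8))  # 3: 6 % 4 != 0, but 6 % 3 == 0
```

Capping and rounding this way avoids the failure mode the PR describes when more than four GPUs are present.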
ptrendx commented on a change in pull request #16817: Fix InferType logic - add
backward inference for some ops
URL: https://github.com/apache/incubator-mxnet/pull/16817#discussion_r347069460
##
File path: src/operator/contrib/deformable_convolution-inl.h
##
@@ -453,18
ptrendx pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new 9abb151 Backport to 1.6 (#16773,
ptrendx merged pull request #16832: Backport to 1.6 (#16773, #16781, #16783,
#16716, #16699, #16728, #16769, #16792)
URL: https://github.com/apache/incubator-mxnet/pull/16832
anirudh2290 commented on a change in pull request #16817: Fix InferType logic -
add backward inference for some ops
URL: https://github.com/apache/incubator-mxnet/pull/16817#discussion_r347066890
##
File path: src/operator/contrib/deformable_convolution-inl.h
##
@@
ptrendx commented on a change in pull request #16817: Fix InferType logic - add
backward inference for some ops
URL: https://github.com/apache/incubator-mxnet/pull/16817#discussion_r347066250
##
File path: src/operator/contrib/deformable_convolution-inl.h
##
@@ -453,18
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 98c0ffc Bump the
reminisce commented on a change in pull request #16825: boolean indexing same
shape
URL: https://github.com/apache/incubator-mxnet/pull/16825#discussion_r347054090
##
File path: tests/python/unittest/test_numpy_op_boolean_indexing.py
##
@@ -0,0 +1,67 @@
+# Licensed to the
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r347052967
##
File path: python/mxnet/_numpy_op_doc.py
##
@@ -1087,3 +1087,39 @@ def _npx_reshape(a, newshape, reverse=False,
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r347052885
##
File path: tests/python/unittest/test_numpy_interoperability.py
##
@@ -57,6 +57,36 @@ def get_workloads(name):
jonatan1626 commented on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554568762
After doing:
`cp make/pip/pip_linux_cu100mkl.mk config.mk`
`make -j`
I still encounter the same error; should I try this on
nikudyshko edited a comment on issue #16741: Error detecting C++11 and C++14
URL:
https://github.com/apache/incubator-mxnet/issues/16741#issuecomment-554557267
@larroy I guess I've found the place where the error occurs.
In ```CMakeLists.txt``` there are the following lines:
nikudyshko commented on issue #16741: Error detecting C++11 and C++14
URL:
https://github.com/apache/incubator-mxnet/issues/16741#issuecomment-554557267
I guess I've found the place where the error occurs.
In ```CMakeLists.txt``` there are the following lines:
reminisce commented on issue #16833: [Numpy] expands_dims cannot suport scalar
URL:
https://github.com/apache/incubator-mxnet/issues/16833#issuecomment-554556279
@PatricZhao
reminisce commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347038122
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ *
reminisce commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347024094
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ *
haojin2 commented on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554554935
Deep Learning AMI 25.0 has CUDA 10.0 installed, and
`make/pip/pip_linux_cu101mkl` is trying to use cuda 10.1's path:
stu1130 opened a new issue #16833: [Numpy] expands_dims cannot suport scalar
URL: https://github.com/apache/incubator-mxnet/issues/16833
Built from source with the codebase as of today.
build flag:
```
cmake -GNinja -DUSE_CUDA=OFF -DBLAS=apple -DUSE_OPENCV=ON
ptrendx opened a new pull request #16832: Backport to 1.6 (#16773, #16781,
#16783, #16716, #16699, #16728, #16769, #16792)
URL: https://github.com/apache/incubator-mxnet/pull/16832
## Description ##
Backports PRs #16773, #16781, #16783, #16716, #16699, #16728, #16769, #16792
to v1.6.x
reminisce commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347024094
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ *
reminisce commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347023110
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ *
ptrendx pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new 867c98d Fix SliceChannel Type
reminisce commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347022548
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -5308,3 +5308,67 @@ def
ptrendx merged pull request #16797: Fix SliceChannel Type inference (#16748)
URL: https://github.com/apache/incubator-mxnet/pull/16797
anirudh2290 commented on a change in pull request #16829: [Numpy][Operator]
'where' Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#discussion_r347021917
##
File path: src/operator/numpy/np_where_op-inl.h
##
@@ -0,0 +1,266 @@
+/*
+ *
nikudyshko commented on issue #16741: Error detecting C++11 and C++14
URL:
https://github.com/apache/incubator-mxnet/issues/16741#issuecomment-554537652
I've added ```message()``` calls after ```if(MSVC)``` and ```else(MSVC)```.
According to the output, cmake goes through the ```if``` branch (and
ptrendx commented on issue #16798: Add unoptimized symbol to executor for
sharing
URL: https://github.com/apache/incubator-mxnet/pull/16798#issuecomment-554534259
@roywei - Ping, could you test that it fixes your issue?
haojin2 commented on a change in pull request #16827: Refactor NumPy-compatible
elemwise broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827#discussion_r347016247
##
File path: src/operator/tensor/elemwise_binary_op.h
##
@@ -474,6 +474,31 @@
reminisce commented on a change in pull request #16827: Refactor
NumPy-compatible elemwise broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827#discussion_r346998428
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -4291,6 +4291,44 @@ def
reminisce commented on a change in pull request #16827: Refactor
NumPy-compatible elemwise broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827#discussion_r347012859
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -1650,6 +1650,7 @@ def
reminisce commented on a change in pull request #16827: Refactor
NumPy-compatible elemwise broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827#discussion_r347013187
##
File path: src/operator/tensor/elemwise_binary_op.h
##
@@ -474,6 +474,31 @@
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r347012799
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,206 @@ void NumpyConcatenateBackward(const
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r347012488
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,206 @@ void NumpyConcatenateBackward(const
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r347012366
##
File path: tests/python/unittest/test_numpy_interoperability.py
##
@@ -89,6 +119,7 @@ def
DickJC123 edited a comment on issue #16765: FusedOp Failing Static Linked Build
URL:
https://github.com/apache/incubator-mxnet/issues/16765#issuecomment-554174049
Finally figured out what's going on here. The build of bin/im2rec via ld
(as driven by g++) is failing because LDFLAGS is
perdasilva commented on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554519129
Hmm... but maybe this is just the number of arguments... not sure if it
checks type annotations etc.
perdasilva commented on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554518803
Regarding the call signature check, I'm really not sure. I've only managed
to find [this](https://www.logilab.org/ticket/5561), which suggests that
perdasilva edited a comment on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554515789
Hey @marcoabreu
I didn't quite get it. I've added some changes to disable this rule for that
specific line.
If I understood you
larroy commented on issue #16741: Error detecting C++11 and C++14
URL:
https://github.com/apache/incubator-mxnet/issues/16741#issuecomment-554516763
Those flags with a dash are in the else branch of the MSVC conditional; how
can they be added to your build flags? Can you add a print in the
perdasilva commented on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554515789
Hey @marcoabreu
I didn't quite get it. I've added some changes to disable this rule for that
specific line.
If I understood you
larroy edited a comment on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554503289
It has been discussed again and again; the best solution is to remove the
additional OpenMP libraries, which break things when mixed
larroy commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-554503289
It has been discussed again and again; the best solution is to remove the
additional OpenMP libraries, which break things when mixed up. Use
larroy edited a comment on issue #16529: How can i compile on the jetson tx2
with tensorrt?
URL:
https://github.com/apache/incubator-mxnet/issues/16529#issuecomment-554500702
@KellenSunderland is an expert on this part. Are your submodules up to date?
There are some tensorrt tests in the
larroy edited a comment on issue #16529: How can i compile on the jetson tx2
with tensorrt?
URL:
https://github.com/apache/incubator-mxnet/issues/16529#issuecomment-554500702
@KellenSunderland is an expert on this part. Are your submodules up to date?
larroy commented on issue #16529: How can i compile on the jetson tx2 with
tensorrt?
URL:
https://github.com/apache/incubator-mxnet/issues/16529#issuecomment-554500702
@KellenSunderland is an expert on this part.
access2rohit commented on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554494001
@mxnet-label-bot add [Build]
hgt312 commented on issue #16829: [Numpy][Operator] 'where' Implementation in
MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#issuecomment-554489820
@xidulu
1. According to NumPy's doc, use `np.nonzero`; we have it.
> When only condition is provided, this function is
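The equivalence the comment quotes from NumPy's documentation, that `np.where` with only a condition behaves as `np.nonzero`, can be checked directly (a plain NumPy illustration, not MXNet code):

```python
import numpy as np

# NumPy's documented behavior: np.where(condition), with no x/y branches,
# is equivalent to np.nonzero(condition).
a = np.array([0, 3, 0, 5])
idx_where = np.where(a > 0)
idx_nonzero = np.nonzero(a > 0)
assert all((w == n).all() for w, n in zip(idx_where, idx_nonzero))
print(idx_where)  # (array([1, 3]),)
```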
calm0815 edited a comment on issue #13785: compile error:token ""__CUDACC_VER__
is no longer supported.
URL:
https://github.com/apache/incubator-mxnet/issues/13785#issuecomment-554489542
@zachgk [Here](https://github.com/msracver/FCIS) is the reason behind using
commit `998378a`, maybe
calm0815 commented on issue #13785: compile error:token ""__CUDACC_VER__ is no
longer supported.
URL:
https://github.com/apache/incubator-mxnet/issues/13785#issuecomment-554489542
@zachgk [Here](https://github.com/msracver/FCIS) is the reason behind using
commit `998378a`
access2rohit edited a comment on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554485248
@ptrendx this potentially means that the build of the pypi cu101mkl wheel
for v1.6.0 might fail too. This needs to be investigated. Do
access2rohit commented on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554485248
@ptrendx this will fail the pypi wheel build for v1.6.0. This needs to be
investigated. Do you know anyone who has knowledge of mshadow
access2rohit edited a comment on issue #16822: Trouble building Mxnet 1.6.x
URL:
https://github.com/apache/incubator-mxnet/issues/16822#issuecomment-554485248
@ptrendx this will fail the pypi cu101mkl wheel build for v1.6.0. This needs
to be investigated. Do you know anyone who has
marcoabreu commented on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554481606
Per, would it be okay with you to quickly run pylint2 manually on your
computer? If everything is okay, I'm happy to merge your PR.
I'm not
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a6e7784 Bump the
xidulu edited a comment on issue #16829: [Numpy][Operator] 'where'
Implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#issuecomment-554435551
Two questions:
1. Numpy supports the following usage:
```
>>> import numpy as np
>>> a =
xidulu commented on issue #16829: [Numpy][Operator] 'where' Implementation in
MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#issuecomment-554435551
Two questions:
1. Numpy supports the following usage:
```
>>> import numpy as np
>>> a =
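The quoted snippet above is cut off, but one usage pattern NumPy supports, and which the question likely concerns, is broadcasting scalars against arrays inside `np.where` (the array values here are illustrative only):

```python
import numpy as np

# Illustration: condition, x, and y broadcast against each other,
# so a scalar is accepted alongside an array.
a = np.arange(5)
out = np.where(a < 3, a, -1)  # scalar y broadcasts to a's shape
print(out)  # [ 0  1  2 -1 -1]
```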
hgt312 commented on issue #16829: [Numpy][Operator] 'where' Implementation in
MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829#issuecomment-554406974
@mxnet-label-bot add [Numpy]
TaoLv commented on issue #16823: [WIP] Upgrade MKL-DNN dependency to v1.1
URL: https://github.com/apache/incubator-mxnet/pull/16823#issuecomment-554386021
@apeforest @yuxihu It would be great if you could try this PR with the
Horovod-MXNet build, as the header names have changed. Thanks.
leezu opened a new issue #16831: [CI] Python2: CPU - hangs after
test_create_np_param
URL: https://github.com/apache/incubator-mxnet/issues/16831
```
test_numpy_gluon.test_create_np_param ... NumPy-shape semantics has been
activated in your code. This is required for creating
artor1os commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346838445
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,206 @@ void NumpyConcatenateBackward(const
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a11c705 Bump the
sxjscience commented on issue #7410: mxnet random seed should not be fixed by
default
URL:
https://github.com/apache/incubator-mxnet/issues/7410#issuecomment-554332636
I suggest fixing the seeding issue in 1.6.1. Currently, MXNet is not
consistent even if we fix the seed:
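As an illustration of the consistency being asked for, here is the property stated with NumPy's generator; the issue itself concerns MXNet's own RNGs, which this sketch does not exercise:

```python
import numpy as np

# "Consistent under a fixed seed" means: constructing two generators
# from the same seed must reproduce the exact same draws.
rng1 = np.random.default_rng(42)
rng2 = np.random.default_rng(42)
draws1 = rng1.standard_normal(4)
draws2 = rng2.standard_normal(4)
assert (draws1 == draws2).all()
```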
leezu opened a new issue #16830: CI error in unix gpu
test_quantization_gpu.test_quantized_conv
URL: https://github.com/apache/incubator-mxnet/issues/16830
more details
hgt312 opened a new pull request #16829: [Numpy][Operator] 'where'
implementation in MXNet
URL: https://github.com/apache/incubator-mxnet/pull/16829
https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html?highlight=where#numpy.where
perdasilva commented on issue #16787: [CD] PyPI pipeline fix
URL: https://github.com/apache/incubator-mxnet/pull/16787#issuecomment-554307446
Made it python2 compatible. Successful example on
leezu commented on issue #16828: Missing imports in
python/mxnet/contrib/__init__.py
URL:
https://github.com/apache/incubator-mxnet/issues/16828#issuecomment-554285172
@ptrendx is there any special reason for not adding the import to
`python/mxnet/contrib/__init__.py` besides above
leezu opened a new issue #16828: Missing imports in
python/mxnet/contrib/__init__.py
URL: https://github.com/apache/incubator-mxnet/issues/16828
## Description
`python/mxnet/contrib/__init__.py` does not `import amp`.
Thus users can't access `mxnet.contrib.amp` without running an
leezu commented on issue #16826: Add missing imports in
python/mxnet/contrib/__init__.py
URL: https://github.com/apache/incubator-mxnet/pull/16826#issuecomment-554282718
This is blocked by some dependencies between contrib module and ndarray
module, as well as hacks in the ndarray module
leezu closed pull request #16826: Add missing imports in
python/mxnet/contrib/__init__.py
URL: https://github.com/apache/incubator-mxnet/pull/16826
WardvanderVelden closed issue #16812: Error when generating mxnet-cpp/op.h
URL: https://github.com/apache/incubator-mxnet/issues/16812
hgt312 commented on issue #16818: [Numpy][TVM] TVM reduce added, support
initial value
URL: https://github.com/apache/incubator-mxnet/pull/16818#issuecomment-554274898
`tvm.min` is slightly different from `np.min`, because it does not handle NaN.
The same applies to `max`, `maximum`, etc.
reminisce commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346720095
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -345,6 +345,59 @@ def
reminisce commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346720326
##
File path: contrib/tvmop/core/polyval.py
##
@@ -0,0 +1,121 @@
+# Licensed to
reminisce commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346709823
##
File path: contrib/tvmop/utils.py
##
@@ -20,6 +20,7 @@
AllTypes =
reminisce commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346703040
##
File path: contrib/tvmop/core/polyval.py
##
@@ -0,0 +1,121 @@
+# Licensed to
haojin2 opened a new pull request #16827: Refactor NumPy-compatible elemwise
broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16827
## Description ##
Adding `bitwise_xor` on the side.
## Checklist ##
### Essentials ###
Please feel free to remove
leezu opened a new pull request #16826: Add missing imports in
python/mxnet/contrib/__init__.py
URL: https://github.com/apache/incubator-mxnet/pull/16826
## Description ##
Add missing imports in python/mxnet/contrib/__init__.py
## Comments ##
Without this change, users need
TsingWei commented on issue #16529: How can i compile on the jetson tx2 with
tensorrt?
URL:
https://github.com/apache/incubator-mxnet/issues/16529#issuecomment-554267706
@larroy I tried building with tensorrt support by setting `USE_TENSORRT` to 1.
At first it could not find some libs of
Tommliu commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346704938
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,207 @@ void NumpyConcatenateBackward(const
artor1os commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346700885
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,207 @@ void NumpyConcatenateBackward(const