sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404523680
##
File path: tests/python/unittest/test_arith_solve_linear_system.py
##
@@ -0,0 +1,213 @@
sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404523919
##
File path: tests/python/unittest/test_arith_solve_linear_system.py
##
@@ -0,0 +1,213 @@
sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404517208
##
File path: tests/python/unittest/test_arith_solve_linear_system.py
##
@@ -14,9 +14,130
sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404530562
##
File path: tests/python/unittest/test_arith_solve_linear_system.py
##
@@ -0,0 +1,213 @@
sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404526806
##
File path: tests/python/unittest/test_arith_solve_linear_system.py
##
@@ -0,0 +1,213 @@
sergei-grechanik commented on a change in pull request #5171: [Arith] linear
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r404526032
##
File path: src/arith/solve_linear_equation.cc
##
@@ -0,0 +1,480 @@
+/*
+ * Licensed to
anijain2305 edited a comment on issue #5230: Adding support for TFLite
QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-610167373
@FrozenGene If we have C = 1, then depthwise conv becomes normal conv.
There is nothing to accumulate across input
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract
operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-610167373
@FrozenGene If we have C = 1, then depthwise conv becomes normal conv.
There is nothing to accumulate across input channels
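The C = 1 observation above can be checked numerically: with a single input channel, the cross-channel accumulation in a normal convolution has exactly one term, so it coincides with a depthwise convolution over that channel. A minimal NumPy sketch (naive loops, "valid" padding, stride 1; the function names are illustrative, not TVM or QNN APIs):

```python
import numpy as np

def conv2d_single(x, k):
    """Valid 2D cross-correlation of one channel with one kernel."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d(x, w):
    """Normal conv: x (C,H,W), w (O,C,kh,kw); sums over input channels."""
    n_out, n_in = w.shape[:2]
    return np.stack([sum(conv2d_single(x[c], w[o, c]) for c in range(n_in))
                     for o in range(n_out)])

def depthwise_conv2d(x, w):
    """Depthwise conv: x (C,H,W), w (C,M,kh,kw); no cross-channel sum."""
    n_ch, mult = w.shape[:2]
    return np.stack([conv2d_single(x[c], w[c, m])
                     for c in range(n_ch) for m in range(mult)])

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 6, 6))      # C = 1
w = rng.standard_normal((1, 3, 3, 3))   # one input channel, channel multiplier 3

# With C = 1 the channel sum in the normal conv degenerates to one term,
# so both layouts compute the same output.
assert np.allclose(depthwise_conv2d(x, w), conv2d(x, w.transpose(1, 0, 2, 3)))
```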
FrozenGene commented on issue #5252: [RUNTIME] Initial implementation of
Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#issuecomment-610163091
Could you provide the `.idl` file too? Because you have provided
`tvm_hexagon_remote.h`, if we don't have `.idl`
FrozenGene commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404519901
##
File path: src/runtime/hexagon/target/hexagon_device_target.cc
##
@@ -0,0
tqchen commented on issue #5256: [onnx] [llvm ]
URL: https://github.com/apache/incubator-tvm/issues/5256#issuecomment-610162643
Please open a new thread on https://discuss.tvm.ai/
This is an automated message from the
FrozenGene commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404523734
##
File path: src/runtime/hexagon/target/hexagon_device_target.cc
##
@@ -0,0
tqchen closed issue #5256: [onnx] [llvm ]
URL: https://github.com/apache/incubator-tvm/issues/5256
tqchen commented on a change in pull request #4459: [RUNTIME] Implement
TVMDSOOp(TensorFlow custom op) for TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/4459#discussion_r404523726
##
File path: cmake/config.cmake
##
@@ -118,7 +118,7 @@
tqchen commented on a change in pull request #4459: [RUNTIME] Implement
TVMDSOOp(TensorFlow custom op) for TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/4459#discussion_r404520154
##
File path: python/tvm/contrib/tf_op/module.py
##
@@ -0,0 +1,113 @@
+#
tqchen commented on issue #4459: [RUNTIME] Implement TVMDSOOp(TensorFlow custom
op) for TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/4459#issuecomment-610157736
@zhiics @FrozenGene please
tqchen opened a new pull request #5258: [TIR] Fix perf regression of tir
refactor
URL: https://github.com/apache/incubator-tvm/pull/5258
cc @kevinthesun
siju-samuel commented on a change in pull request #5249: [PYTORCH]LayerNorm
support added
URL: https://github.com/apache/incubator-tvm/pull/5249#discussion_r404512746
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -503,6 +503,34 @@ def _impl(inputs,
siju-samuel opened a new pull request #5257: [Pytorch]layernorm bug fix and
testcase updated
URL: https://github.com/apache/incubator-tvm/pull/5257
- layernorm bug fixed
- layernorm testcase updated
- assert condition missing bug fixed
@masahi please help to review this PR.
fcqfcq commented on issue #5256: onnx llvm
URL: https://github.com/apache/incubator-tvm/issues/5256#issuecomment-610146623
WARNING:root:Failed to download tophub package for llvm:
download failed due to URLError(TimeoutError(110, 'Connection timed out'),),
retrying, 2 attempts left
fcqfcq opened a new issue #5256: onnx llvm
URL: https://github.com/apache/incubator-tvm/issues/5256
Thanks for participating in the TVM community! We use https://discuss.tvm.ai
for any general usage questions and discussions. The issue tracker is used for
actionable items such as
fcqfcq opened a new issue #5255: pytorch ['aten::leaky_relu',
'aten::reciprocal', 'aten::repeat']
URL: https://github.com/apache/incubator-tvm/issues/5255
Thanks for participating in the TVM community! We use https://discuss.tvm.ai
for any general usage questions and discussions. The
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 00b2304 [Topi] Breakdown topi.cc into smaller files (#5253)
add 00a8481 [TE] Minor bugfix in
tqchen merged pull request #5254: [TE] Minor bugfix in message_passing.cc
URL: https://github.com/apache/incubator-tvm/pull/5254
tqchen commented on issue #5254: [TE] Minor bugfix in message_passing.cc
URL: https://github.com/apache/incubator-tvm/pull/5254#issuecomment-610131191
Thanks @pratikfegade !
hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on
test_autotvm_xgboost_model.py
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-610119249
@leandron Thanks for pointing out which part of TVM test is failing. Not
sure if running in debug
tqchen commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-610104967
That will further complicate the CI itself. It would be great if we can
explore other ways to validate and stick with the default mxnet
tqchen edited a comment on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-610104967
That will further complicate the CI itself. It would be great if we can
explore other ways to validate (e.g. simulated quantize) and
icemelon9 edited a comment on issue #5248: [FRONTEND] tensorflow GatherNd
function error
URL: https://github.com/apache/incubator-tvm/issues/5248#issuecomment-610100600
Yes, I've opened a topic in the discussion forum. Let's discuss there.
https://discuss.tvm.ai/t/gather-nd-semantics/6243
icemelon9 commented on issue #5248: [FRONTEND] tensorflow GatherNd function
error
URL: https://github.com/apache/incubator-tvm/issues/5248#issuecomment-610100600
Yes, let's open a discussion in the forum about this.
https://discuss.tvm.ai/t/gather-nd-semantics/6243
shoubhik commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-610063790
Can't we have another step in CI that executes MXNet QNN models on Intel
CPUs only?
masahi commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-610052206
Great, I got the following working. Also confirmed `get_tensor_array_shape`
worked. Happy now :) Thank you very
haichen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 0cc2661 [PYTORCH]LayerNorm support added (#5249)
add 00b2304 [Topi] Breakdown topi.cc into smaller
icemelon9 merged pull request #5253: [Topi] Breakdown topi.cc into smaller files
URL: https://github.com/apache/incubator-tvm/pull/5253
masahi commented on a change in pull request #5249: [PYTORCH]LayerNorm support
added
URL: https://github.com/apache/incubator-tvm/pull/5249#discussion_r404391408
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -503,6 +503,34 @@ def _impl(inputs, input_types):
kevinthesun commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-610039127
https://github.com/apache/incubator-tvm/pull/5243/files#diff-eae8ecf976e0031823eeae454466f964R903
Take
masahi commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-610032209
hmm I tried this:
```Py
def _tensor_array_stack(prelude):
    def _impl(inputs, input_types):
```
kevinthesun commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-610029111
@masahi The shape passed to ```get_var_static``` is for identification. For
tensor_get_data, it is just
tqchen commented on issue #5190: Create loops according to storage scope and
thread hierarchies
URL: https://github.com/apache/incubator-tvm/pull/5190#issuecomment-610020931
@Hzfengsy @vinx13 @ZihengJiang can you help to take a look?
masahi merged pull request #5249: [PYTORCH]LayerNorm support added
URL: https://github.com/apache/incubator-tvm/pull/5249
masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 5e50f47 [RUNTIME] Enable auto conversion from str to runtime::String
in PackedFunc, move dtype related
masahi commented on issue #5249: [PYTORCH]LayerNorm support added
URL: https://github.com/apache/incubator-tvm/pull/5249#issuecomment-610021128
Thanks @siju-samuel. I've just met a model with layer norm yesterday, this
is immediately useful to me :)
tqchen commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-610019209
Yes, I think we should use `mxnet==1.6.0` instead. Unfortunately it does not
give a simple answer to @shoubhik 's problem, if mxnet's quantized
icemelon9 commented on issue #5251: [RUNTIME] Auto conversion from str to
runtime::String in PackedFUnc
URL: https://github.com/apache/incubator-tvm/pull/5251#issuecomment-610017959
Thanks @tqchen @zhiics
masahi commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-610016636
> @masahi You can use
tmoreau89 commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404326477
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
icemelon9 commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-609983357
So in that case, should we use mxnet==1.6.0 instead of mxnet-mkl?
icemelon9 opened a new pull request #5253: [Topi] Breakdown topi.cc into
smaller files
URL: https://github.com/apache/incubator-tvm/pull/5253
Thanks for contributing to TVM! Please refer to guideline
https://tvm.apache.org/docs/contribute/ for useful information and tips. After
the
kparzysz-quic commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404307975
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
tqchen commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404304102
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
wpan11nv commented on a change in pull request #5226: [CODEGEN][CUDA] Fix
vector load
URL: https://github.com/apache/incubator-tvm/pull/5226#discussion_r404304285
##
File path: tests/python/unittest/test_target_codegen_cuda.py
##
@@ -543,6 +543,40 @@ def run_test(dtype):
tqchen commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404303430
##
File path: include/tvm/runtime/device_api.h
##
@@ -47,10 +47,10 @@ enum
wpan11nv commented on a change in pull request #5226: [CODEGEN][CUDA] Fix
vector load
URL: https://github.com/apache/incubator-tvm/pull/5226#discussion_r404303393
##
File path: src/target/source/codegen_cuda.cc
##
@@ -796,5 +796,49 @@ void
kparzysz-quic commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404295175
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
wpan11nv commented on a change in pull request #5226: [CODEGEN][CUDA] Fix
vector load
URL: https://github.com/apache/incubator-tvm/pull/5226#discussion_r404293617
##
File path: tests/python/unittest/test_target_codegen_cuda.py
##
@@ -543,6 +543,40 @@ def run_test(dtype):
wpan11nv commented on a change in pull request #5226: [CODEGEN][CUDA] Fix
vector load
URL: https://github.com/apache/incubator-tvm/pull/5226#discussion_r404292586
##
File path: src/target/source/codegen_c.cc
##
@@ -955,5 +946,30 @@ void CodeGenC::VisitStmt_(const
tmoreau89 commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404291517
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
tmoreau89 commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404289700
##
File path: include/tvm/runtime/device_api.h
##
@@ -47,10 +47,10 @@ enum
tmoreau89 commented on a change in pull request #5252: [RUNTIME] Initial
implementation of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#discussion_r404289091
##
File path: src/runtime/hexagon/README.md
##
@@ -0,0 +1,76 @@
wpan11nv commented on a change in pull request #5226: [CODEGEN][CUDA] Fix
vector load
URL: https://github.com/apache/incubator-tvm/pull/5226#discussion_r404291102
##
File path: src/target/source/literal/cuda_half_t.h
##
@@ -291,7 +291,7 @@ static inline __device__
tqchen commented on issue #5252: [RUNTIME] Initial implementation of Hexagon
runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252#issuecomment-609948356
cc @kazum @tmoreau89
kparzysz-quic closed pull request #3163: [RUNTIME] Initial implementation of
Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/3163
kparzysz-quic commented on issue #3163: [RUNTIME] Initial implementation of
Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/3163#issuecomment-609930504
Replaced by https://github.com/apache/incubator-tvm/pull/5252.
kparzysz-quic opened a new pull request #5252: [RUNTIME] Initial implementation
of Hexagon runtime support
URL: https://github.com/apache/incubator-tvm/pull/5252
This is only the TVM runtime. The FastRPC libraries, simulator driver, etc.
will be provided in subsequent commits.
kevinthesun commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-609926012
> 1. When I append the tensor to tensor array (by concat), I can do infer
shape on the input tensor to get
kevinthesun commented on issue #5243: [Frontend][TensorFlow]Improve TensorFlow
Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#issuecomment-609924079
> 2\. The output type of stack is currently
`static_tensor_float32_?_2_4_t[]` in my test. Is there a way
kevinthesun commented on a change in pull request #5243:
[Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5243#discussion_r404255135
##
File path: python/tvm/relay/frontend/common.py
##
@@ -548,6 +558,28
tmoreau89 commented on issue #4272: [VTA] Tutorial on how to deploy and execute
model on device without RPC
URL: https://github.com/apache/incubator-tvm/issues/4272#issuecomment-609910867
@flip1995 if you could start a new page of the doc (see:
tqchen edited a comment on issue #5250: [REFACTOR] StringImm -> String
URL: https://github.com/apache/incubator-tvm/issues/5250#issuecomment-609886066
cc @wweic @zhiics please see if you can take the rest of the items in the
incoming week
tqchen opened a new pull request #5251: [RUNTIME] Auto conversion from str to
runtime::String in PackedFUnc
URL: https://github.com/apache/incubator-tvm/pull/5251
Also moves dtype related handling to data_type.h
tqchen commented on issue #5251: [RUNTIME] Auto conversion from str to
runtime::String in PackedFUnc
URL: https://github.com/apache/incubator-tvm/pull/5251#issuecomment-609907149
cc @zhiics @wweic
tqchen commented on issue #5124: [uTVM][Runtime] Introduce Virtual Memory
Allocator to CRT
URL: https://github.com/apache/incubator-tvm/pull/5124#issuecomment-609895984
LGTM from my side. @liangfu let us also make sure we add the crt test to the
CI, possibly by running some of the quick
chinakook commented on issue #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238#issuecomment-609889153
I'm sure some networks will fail to hybridize after the
https://github.com/apache/incubator-mxnet/pull/15167 PR. So we can add
tqchen commented on issue #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238#issuecomment-609888261
I see, perhaps you also want to confirm with the mxnet devs to make sure it
is the intended behavior cc @szha
tqchen edited a comment on issue #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238#issuecomment-609888261
I see, perhaps you also want to confirm with the mxnet community to
make sure it is the intended behavior cc @szha
tqchen opened a new issue #5250: [REFACTOR] StringImm -> String
URL: https://github.com/apache/incubator-tvm/issues/5250
We have introduced the runtime::String to the tvm runtime. This new object
can be used to replace most of the StringImm usages, except for the one that
uses the IR.
tqchen commented on issue #5250: [REFACTOR] StringImm -> String
URL: https://github.com/apache/incubator-tvm/issues/5250#issuecomment-609886066
cc @wweic @zhiics
chinakook commented on issue #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238#issuecomment-609885638
@tqchen As I've just tested, it's an op named ```_FusedOp``` that cannot
pass the CHECK.
siju-samuel opened a new pull request #5249: [PYTORCH]LayerNorm support added
URL: https://github.com/apache/incubator-tvm/pull/5249
@masahi please help to review this PR. TIA
Thanks for contributing to TVM! Please refer to guideline
https://tvm.apache.org/docs/contribute/ for
tqchen commented on issue #5248: [FRONTEND] tensorflow GatherNd function error
URL: https://github.com/apache/incubator-tvm/issues/5248#issuecomment-609882439
cc @kazum @srkreddy1238 @icemelon9 Would be great if we can look into this.
Also perhaps we should have a discussion in the
tqchen edited a comment on issue #5247: [TIR] Fix lower_warp_memory
URL: https://github.com/apache/incubator-tvm/pull/5247#issuecomment-609873561
Thanks @roastduck ! Indeed we need better test coverage for this pass.
tqchen commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-609877110
Given that we also have machines with AMD CPUs on the CI fleet, and may in
the future introduce other kinds of CPUs, it would be great if we
shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract
operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-609875364
@FrozenGene @yzhliu Can you take a look at the fix for depthwise conv2d with
1 input channel?
shoubhik commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-609874843
At this time MKL is only compatible with Intel CPUs.
tqchen commented on issue #5245: [TIR] lower_warp_memory not working
URL: https://github.com/apache/incubator-tvm/issues/5245#issuecomment-609874135
Closed by #5247, thanks @roastduck
tqchen closed issue #5245: [TIR] lower_warp_memory not working
URL: https://github.com/apache/incubator-tvm/issues/5245
tqchen commented on issue #5247: [TIR] Fix lower_warp_memory
URL: https://github.com/apache/incubator-tvm/pull/5247#issuecomment-609873561
Thanks @roastduck !
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new f31df01 fix lower_warp_memory (#5247)
tqchen merged pull request #5247: [TIR] Fix lower_warp_memory
URL: https://github.com/apache/incubator-tvm/pull/5247
tqchen commented on issue #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238#issuecomment-609873168
I am going to merge this for now, given it is a change to the nnvm code.
However, based on my current impression, I am not really sure if such
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new 3e8c7be fix to skip node not in graph.
tqchen merged pull request #5238: fix to skip node not in graph.
URL: https://github.com/apache/incubator-tvm/pull/5238
tqchen commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-609871178
Just to confirm again, does mxnet-mkl work on machines with AMD CPUs?
tqchen commented on issue #5240: [CI] Update MxNet to 1.6.0 with MKL
URL: https://github.com/apache/incubator-tvm/pull/5240#issuecomment-609870727
I will reply to this once the CI image is updated; should be within this week
tqchen commented on issue #5241: [Relay][OP] Add fast_erf implementation
URL: https://github.com/apache/incubator-tvm/pull/5241#issuecomment-609835385
Let us try to break topi.cc into multiple files grouped by their .h
adobay opened a new issue #5248: [FRONTEND] tensorflow GatherNd function error
URL: https://github.com/apache/incubator-tvm/issues/5248
in relay/frontend/tensorflow.py, GatherNd's converter function is relay's
gather_nd, but gather_nd in relay follows the mxnet semantics where each column
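The mismatch described in the issue comes down to index layout: TensorFlow's gather_nd reads one coordinate tuple from the *last* axis of `indices`, while the MXNet convention that relay follows reads coordinate tuples from the *first* axis (each column is one coordinate). The two layouts differ by a transpose, which a small NumPy sketch makes concrete (illustrative only, not the actual frontend converter):

```python
import numpy as np

data = np.arange(12).reshape(3, 4)

# TensorFlow semantics: last axis of indices holds one coordinate tuple per row.
tf_indices = np.array([[0, 1],
                       [2, 3]])          # picks data[0, 1] and data[2, 3]
tf_out = data[tuple(tf_indices.T)]       # -> [1, 11]

# MXNet/relay semantics: first axis enumerates the data axes,
# so each *column* of indices is one coordinate tuple.
mx_indices = tf_indices.T                # transposed layout, shape (2, n)
mx_out = data[tuple(mx_indices)]         # -> [1, 11]

# A transpose of the indices maps one convention onto the other.
assert np.array_equal(tf_out, mx_out)
```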
roastduck opened a new pull request #5247: [TIR] Fix lower_warp_memory
URL: https://github.com/apache/incubator-tvm/pull/5247
Fixing #5245
Just have a look at the change to `src/tir/transforms/lower_warp_memory.cc`
and you will find the problem at a glance. I have to say this part was
roastduck commented on issue #5245: [TIR] lower_warp_memory not working
URL: https://github.com/apache/incubator-tvm/issues/5245#issuecomment-609771744
Well, `%s/Mutate_/VisitExpr_` solves the problem. I will make a PR.
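The `%s/Mutate_/VisitExpr_` fix above is an instance of a common pitfall after a visitor-API rename: an override of the old hook name still compiles, but the dispatcher never calls it, so the pass silently becomes a no-op. A generic Python illustration (class and method names are hypothetical, not TVM's actual functor API):

```python
class ExprMutator:
    """Toy IR mutator: dispatch goes through visit_expr_, the current hook."""
    def visit(self, expr):
        return self.visit_expr_(expr)

    def visit_expr_(self, expr):
        return expr  # default: identity

class BrokenPass(ExprMutator):
    # Overrides the *old* hook name; dispatch never reaches it.
    def mutate_(self, expr):
        return ("rewritten", expr)

class FixedPass(ExprMutator):
    # Overrides the current hook name, so dispatch reaches it.
    def visit_expr_(self, expr):
        return ("rewritten", expr)

assert BrokenPass().visit("x") == "x"                # silently a no-op
assert FixedPass().visit("x") == ("rewritten", "x")  # actually rewrites
```

Better test coverage, as tqchen notes above, is what catches this class of bug: the broken pass raises no error, it just leaves the IR untouched.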
maheshambule opened a new pull request #5246: [Frontend|MXNet] SwapAxis
operator support
URL: https://github.com/apache/incubator-tvm/pull/5246
Reference:
https://beta.mxnet.io/api/ndarray/_autogen/mxnet.ndarray.swapaxes.html