kazum commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375697207
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
kazum commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375698393
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,792 @@
+# Licensed to the Apache
kazum commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375696154
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
kazum commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375698797
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
uday60 opened a new issue #4830: libnnvm_compiler.so file missing | Commands
inside
URL: https://github.com/apache/incubator-tvm/issues/4830
```
Ubuntu 16.04
Cuda 10.1
```
Steps Followed:
```
git clone --recursive https://github.com/apache/incubator-tvm tvm
cd
```
GuoliangLiCN opened a new issue #4831: [Relay][FrontEnd][MxNet] incorrect
data_min_idx and data_max_idx in _qnn_conv
URL: https://github.com/apache/incubator-tvm/issues/4831
In lines 1409-1413 of python/tvm/relay/frontend/mxnet.py, are there
typos when has_sum is False?
clhne opened a new pull request #4832: It's gpu not cpu.
URL: https://github.com/apache/incubator-tvm/pull/4832
I think there is a mistake here:
https://github.com/apache/incubator-tvm/blob/2bd2f9988b09e71178972c136438cab6702e7b89/python/tvm/runtime/ndarray.py#L317
inadob closed pull request #4807: [Frontend][TFLite] Fix quantized pad value
for convolution
URL: https://github.com/apache/incubator-tvm/pull/4807
This is an automated message from the Apache Git Service.
To respond to the
anijain2305 commented on issue #4816: [TFLite] Using real image for QNN testing.
URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-582955983
Ping @FrozenGene
liangfu commented on issue #3934: [WIP][Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-582950448
@tqchen sure, I can continue to push this thread, although this has been
suspended for a while.
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376030507
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
tqchen merged pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and
onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from b88de43 [CI][DOCKER] Update ci-gpu to v0.60 (#4827)
add 9d741ef [CI][DOCKER] Update ci-gpu torch1.4
anijain2305 edited a comment on issue #4828: [QNN][TFLite] TFLite rounding mode
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583004126
Thanks for this PR. It will help reduce/remove those off-by-one deviations.
One request that I have is: while setting the
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376015202
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376018125
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1023 @@
+# Licensed to the Apache
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376018652
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,792 @@
+# Licensed to the
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 62543d4 It's gpu not cpu. (#4832)
add b88de43 [CI][DOCKER] Update ci-gpu to v0.60 (#4827)
No new
tqchen merged pull request #4827: [CI][DOCKER] Update ci-gpu to v0.60
URL: https://github.com/apache/incubator-tvm/pull/4827
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r375982082
##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -0,0 +1,277 @@
+# Licensed to the Apache Software Foundation
tqchen commented on issue #4824: Tflite frontend needs to use zero point of
input tensor while lowering qnn.conv2d for padding
URL: https://github.com/apache/incubator-tvm/issues/4824#issuecomment-583022408
Let us mark the related PRs with [NEED-BACKPORT] tag, and then we can
backport to
anijain2305 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583004126
Thanks for this PR. It will help reduce/remove those off-by-one deviations.
One request that I have is: while setting the
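These off-by-one deviations trace back to a rounding-mode mismatch: TFLite's quantized kernels break .5 ties away from zero, while a round-half-to-even path disagrees exactly on those ties (this is my reading of the thread; the sketch below is illustrative, not TVM's or TFLite's implementation):

```python
import math

def round_half_away_from_zero(x):
    # TFLite-style tie-breaking: 2.5 -> 3, -2.5 -> -3.
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def round_half_to_even(x):
    # Banker's rounding, what Python's built-in round() does: 2.5 -> 2.
    return round(x)

# The two modes agree everywhere except exact .5 ties, which is where
# one-unit deviations in requantized outputs can come from.
for x in (0.5, 2.5, -2.5):
    print(x, round_half_away_from_zero(x), round_half_to_even(x))
```

Values that are not exact ties round identically under both modes, which is why the deviations are rare and only ever one unit.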
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 2bd2f99 [TOPI][Relay] Add bitwise ops (#4815)
add 62543d4 It's gpu not cpu. (#4832)
No new revisions
tqchen merged pull request #4832: It's gpu not cpu.
URL: https://github.com/apache/incubator-tvm/pull/4832
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376057894
##
File path: python/tvm/relay/op/strategy/hls.py
##
@@ -0,0 +1,151 @@
+# Licensed to the Apache Software Foundation
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376057778
##
File path: include/tvm/relay/op_attr_types.h
##
@@ -207,14 +216,182 @@ enum AnyCodegenStrategy {
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376057894
##
File path: python/tvm/relay/op/strategy/hls.py
##
@@ -0,0 +1,151 @@
+# Licensed to the Apache Software Foundation
zhiics commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376131300
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376131344
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376129550
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376133245
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376142732
##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -0,0 +1,277 @@
+# Licensed to the Apache Software Foundation
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-583164182
> Sure, I removed it but should it be
>
> `if (version.parse(torch.__version__) >= version.parse("1.4.0")):`
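The check quoted above relies on numeric rather than lexicographic version comparison. A dependency-free sketch of why that matters (`parse_release` is a hypothetical stand-in for `packaging.version.parse`, handling only simple `major.minor.patch` strings):

```python
def parse_release(ver):
    # Hypothetical stand-in for packaging.version.parse: split the release
    # into an integer tuple so "1.10.0" compares greater than "1.4.0".
    # Local suffixes such as "1.4.0+cu101" are stripped for this sketch.
    return tuple(int(p) for p in ver.split("+")[0].split("."))

print(parse_release("1.10.0") >= parse_release("1.4.0"))  # True
print("1.10.0" >= "1.4.0")  # False: lexicographic comparison is wrong here
```

The plain string comparison fails once a component reaches two digits, which is exactly the case a parsed comparison guards against.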
tqchen commented on issue #: [TEST][FLAKY] test_adaptive_pool FAILED
URL: https://github.com/apache/incubator-tvm/issues/#issuecomment-583201755
#4836
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-583157323
Currently it seems there are some unrelated CI issues?
I lowered more input sizes to see if it has something to do with
icemelon9 commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376214676
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,767 @@
+# Licensed to the
icemelon9 commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376214786
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,767 @@
+# Licensed to the
zhiics commented on issue #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564#issuecomment-583199805
Thanks everyone. This is now merged.
zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 85f7e0a Fix doc after moving to unified IR (#4835)
add d2cc214 [Doc] Introduction to module
FrozenGene commented on issue #4822: [Frontend][TFLite] Add MIRROR_PAD operator
URL: https://github.com/apache/incubator-tvm/pull/4822#issuecomment-583233499
@u99127 @inadob Please approve explicitly if you think this PR is good now.
tqchen merged pull request #4835: [doc] Fix doc after moving to unified IR
URL: https://github.com/apache/incubator-tvm/pull/4835
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 9d741ef [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6 (#4826)
add 85f7e0a Fix doc after moving to
soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r376144976
##
File path: python/tvm/relay/frontend/onnx.py
##
@@ -1190,6 +1250,145 @@ def expand_shape(in_shape, shape):
tqchen commented on issue #4830: libnnvm_compiler.so file missing | Commands
inside
URL: https://github.com/apache/incubator-tvm/issues/4830#issuecomment-583194780
Please open a new thread on https://discuss.tvm.ai. libnnvm_compiler is
deprecated and is no longer needed in the current
tqchen closed issue #4830: libnnvm_compiler.so file missing | Commands inside
URL: https://github.com/apache/incubator-tvm/issues/4830
FrozenGene commented on issue #4816: [TFLite] Using real image for QNN testing.
URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-583217165
ping @inadob, would you review it again? If you have no other comments,
would you give an explicit approval? Thanks!
masahi edited a comment on issue #4756: [Docker] update onnx to 1.6 and torch
to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-583124373
@tqchen can we send torch 1.4 and onnx 1.6 dependent changes now that #4827
and #4826 were merged? It seems only `ci-gpu` was
tqchen commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-583153220
Only ci-gpu is updated atm. Let me know if we also need to update ci-cpu
tqchen opened a new pull request #4836: Improve tol to resolve flaky case
URL: https://github.com/apache/incubator-tvm/pull/4836
https://github.com/apache/incubator-tvm/issues/
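Resolving a flaky numerical test by "improving tol" generally means relaxing the `rtol`/`atol` of the comparison so benign cross-backend drift passes. A generic sketch of the pattern (the tolerances here are illustrative, not the values in this PR):

```python
import numpy as np

expected = np.array([1.0, 2.0, 3.0])
actual = expected + 2e-4  # small numeric drift, e.g. between backends

# A tight tolerance reports the drift as a failure...
tight_ok = np.allclose(actual, expected, rtol=1e-5, atol=1e-7)
# ...while a slightly relaxed tolerance accepts it.
loose_ok = np.allclose(actual, expected, rtol=1e-3, atol=1e-4)
print(tight_ok, loose_ok)  # False True
```

The per-element criterion is `|actual - expected| <= atol + rtol * |expected|`, so either knob can be raised depending on whether the drift is absolute or relative.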
tqchen closed pull request #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-583162272
> @alexwong CI has been updated (see #4826, #4827). Can you try remove torch
version check and see what happens?
Sure, I removed it
masahi commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-583163841
does `ci-cpu` run on CI to test PRs? If so, I cannot send torch 1.4 and onnx
1.6 dependent changes.
tqchen opened a new pull request #4837: [REFACTOR][PY][API-Change] Polish
tvm.runtime, tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837
This PR updates the tvm.runtime to use the new FFI style.
- Remove top-level tvm.module to avoid confusion
tqchen commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime,
tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-583188351
cc @FrozenGene @icemelon9 @yzhliu @ZihengJiang
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376144257
##
File path: include/tvm/relay/op_attr_types.h
##
@@ -207,13 +216,137 @@ enum AnyCodegenStrategy {
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r375569255
##
File path: include/tvm/te/schedule.h
##
@@ -742,6 +743,55 @@ class SingletonNode : public IterVarRelationNode {
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376167957
##
File path: python/tvm/relay/backend/compile_engine.py
##
@@ -63,6 +85,317 @@ def _get_cache_key(source_func, target):
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376171923
##
File path: python/tvm/schedule.py
##
@@ -650,4 +650,38 @@ def opengl(self):
"""
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376144115
##
File path: include/tvm/relay/op_attr_types.h
##
@@ -207,13 +216,137 @@ enum AnyCodegenStrategy {
zhiics commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r376162304
##
File path: include/tvm/te/schedule.h
##
@@ -742,6 +743,55 @@ class SingletonNode : public IterVarRelationNode {
tqchen closed issue #: [TEST][FLAKY] test_adaptive_pool FAILED
URL: https://github.com/apache/incubator-tvm/issues/
tqchen merged pull request #4836: Improve tol to resolve flaky case
URL: https://github.com/apache/incubator-tvm/pull/4836
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 75e9f5d [Frontend][ONNX] LSTM Support (#4825)
add e578777 Improve tol to resolve flaky case (#4836)
masahi commented on issue #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825#issuecomment-583201146
Thanks @jwfromm @soiferj @mbrookhart
jwfromm commented on issue #4834: [Doc] ConvertLayout - Call
RemoveUnunsedFunctions.
URL: https://github.com/apache/incubator-tvm/pull/4834#issuecomment-583262253
Makes sense, LGTM
jwfromm commented on a change in pull request #4825: [Frontend][ONNX] LSTM
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r376137993
##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -1962,6 +1962,126 @@ def test_pooling():
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-583159644
@alexwong CI has been updated (see
https://github.com/apache/incubator-tvm/pull/4826,
https://github.com/apache/incubator-tvm/pull/4827).
tqchen commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-583190431
I believe it runs unit-tests but not integration(frontend tests)
This
zhiics merged pull request #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564
FrozenGene commented on issue #4837: [REFACTOR][PY][API-Change] Polish
tvm.runtime, tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-583216789
https://github.com/apache/incubator-tvm/pull/4564 is merged, but the doc
still uses
FrozenGene edited a comment on issue #4828: [QNN][TFLite] TFLite rounding mode
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583218692
> One request that I have is - While setting the rounding mode to TFLite in
TFLite parser, it might be better to set it by
zhiics opened a new pull request #4835: [doc] Fix doc after moving to unified IR
URL: https://github.com/apache/incubator-tvm/pull/4835
Some docs need to be refactored after moving to the object protocol and
unified IR.
https://discuss.tvm.ai/t/where-is-pass-manager-cc/5579
tqchen commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime,
tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-583218419
@FrozenGene Good catch, I have updated the PR to reflect the latest set of
changes
FrozenGene commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583218692
> One request that I have is - While setting the rounding mode to TFLite in
TFLite parser, it might be better to set it by adding an
FrozenGene edited a comment on issue #4828: [QNN][TFLite] TFLite rounding mode
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583218692
> One request that I have is - While setting the rounding mode to TFLite in
TFLite parser, it might be better to set it by
LiangHao151941 commented on a change in pull request #4828: [QNN][TFLite]
TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#discussion_r376249483
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1212,6 +1214,7 @@ def
LiangHao151941 commented on issue #4828: [QNN][TFLite] TFLite rounding mode
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583271045
Some update on further experiments @FrozenGene @anijain2305
> 1. if we have TFLITE rounding support, we should make
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-583157323
Currently it seems there are some unrelated CI issues?
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376135176
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r376135103
##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -0,0 +1,1045 @@
+# Licensed to the Apache
masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new 75e9f5d [Frontend][ONNX] LSTM Support
masahi merged pull request #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825
zhiics commented on a change in pull request #4459: [RUNTIME] Implement
TVMDSOOp(TensorFlow custom op) for TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/4459#discussion_r375651019
##
File path: src/codegen/build_module.cc
##
@@ -572,6 +572,7 @@
FrozenGene commented on a change in pull request #4828: [QNN][TFLite] TFLite
rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#discussion_r376237587
##
File path: src/relay/qnn/util.cc
##
@@ -22,13 +22,49 @@
* \brief Utility functions for QNN.
FrozenGene commented on a change in pull request #4828: [QNN][TFLite] TFLite
rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#discussion_r376237528
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1212,6 +1214,7 @@ def
FrozenGene commented on a change in pull request #4828: [QNN][TFLite] TFLite
rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#discussion_r376237772
##
File path: src/relay/qnn/util.cc
##
@@ -22,13 +22,49 @@
* \brief Utility functions for QNN.
anijain2305 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583275379
I thought a little more about the bit-exact problem. One source of discrepancy
for certain is the QNN add and QNN concatenate ops. These
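QNN add and concatenate are natural sources of off-by-one discrepancies because each input must be requantized to a common output scale, and every requantize step rounds. A toy pure-Python illustration with made-up scales and zero points (not TVM's kernel):

```python
def quantize(x, scale, zero_point):
    # Affine quantization to int8-style integers (clamping omitted for brevity).
    return int(round(x / scale)) + zero_point

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Two tensors quantized with different (hypothetical) scales.
a_scale, b_scale, out_scale = 0.05, 0.03, 0.08
a = quantize(1.23, a_scale, 0)
b = quantize(0.77, b_scale, 0)

# Quantized add: requantize both inputs to the output scale, then add.
# Each requantization rounds, so the sum can land one unit away from
# quantizing the float sum directly.
a_req = int(round(dequantize(a, a_scale, 0) / out_scale))
b_req = int(round(dequantize(b, b_scale, 0) / out_scale))
q_sum = a_req + b_req

float_ref = int(round((1.23 + 0.77) / out_scale))
print(q_sum, float_ref)  # 26 25
```

The two rounds in the requantize path accumulate, so matching a reference runtime bit-exactly requires matching its rounding mode at every step, not just in conv2d.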
anijain2305 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583275888
> If we think it is important, we could even port this back to release 0.6;
one PR will make us do it easily too.
That is ok.
LiangHao151941 commented on a change in pull request #4828: [QNN][TFLite]
TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#discussion_r376252855
##
File path: src/relay/qnn/util.cc
##
@@ -22,13 +22,49 @@
* \brief Utility functions for
jwfromm commented on a change in pull request #4825: [Frontend][ONNX] LSTM
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r376092067
##
File path: python/tvm/relay/frontend/onnx.py
##
@@ -1190,6 +1250,145 @@ def expand_shape(in_shape, shape):
samwyi opened a new pull request #4833: Update CONTRIBUTORS.md
URL: https://github.com/apache/incubator-tvm/pull/4833
Thanks for contributing to TVM! Please refer to guideline
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull
request is submitted, please
masahi commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-583124373
@tqchen can we send torch 1.4 and onnx 1.6 dependent changes now that #4827
and #4826 were merged?
Also this PR can be closed.
masahi commented on issue #4833: Update CONTRIBUTORS.md
URL: https://github.com/apache/incubator-tvm/pull/4833#issuecomment-583125128
This is not how it works. When you make your first PR, you can update the
contributor list to add your name.
I'm responsible for reviewing rocm
masahi closed pull request #4833: Update CONTRIBUTORS.md
URL: https://github.com/apache/incubator-tvm/pull/4833
anijain2305 opened a new pull request #4834: [Doc] ConvertLayout - Call
RemoveUnunsedFunctions.
URL: https://github.com/apache/incubator-tvm/pull/4834
As Title.
@yzhliu @jwfromm