insop commented on a change in pull request #7230:
URL: https://github.com/apache/tvm/pull/7230#discussion_r553766990
##
File path: tests/python/frontend/mxnet/test_forward.py
##
@@ -1935,6 +1935,29 @@ def verify(data_shape, axis, use_length, length):
verify((2, 3, 4), 2,
codeislife99 opened a new pull request #7231:
URL: https://github.com/apache/tvm/pull/7231
This PR adds the above ops to the PyTorch frontend to support customer models.
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
codeislife99 commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756674827
cc: @anijain2305 @trevor-m
masahi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756693366
cc @t-vi @yongwww
This is probably not how we should support `copy_`. We discussed this issue
before.
Later I found a promising way to support inplace copy: We first co
masahi edited a comment on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756693366
cc @t-vi @yongwww
This is probably not how we should support `copy_`. We discussed this issue
before.
Later I found a promising way to support inplace copy/assig
d-smirnov commented on pull request #7113:
URL: https://github.com/apache/tvm/pull/7113#issuecomment-756695634
@siju-samuel @FrozenGene
bump
t-vi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756705288
@masahi can you elaborate a bit how you want to do that?
So to my mind there are two parts to this. For slice assignments
```python
@torch.jit.script
def foo(x):
x[:
masahi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756716845
For the first case,
```
def foo(x):
x[:5] = 0
return 2 * x
```
Using the two passes I mentioned, I get this graph:
```
graph(%x : Float(10:1, requir
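The functional rewrite being discussed can be illustrated with a small sketch (pure Python with hypothetical names; TVM's actual handling operates on the TorchScript graph, not Python source): the inplace `x[:5] = 0` becomes a copy-and-set that produces a new value, in the spirit of Relay's `strided_set`.

```python
# Hypothetical sketch: replace an inplace slice assignment with a
# functional update that returns a new list instead of mutating input.
def functional_slice_set(data, value, begin, end):
    """Return a copy of `data` with positions [begin, end) set to `value`."""
    out = list(data)  # copy; the input is never mutated
    for i in range(begin, min(end, len(out))):
        out[i] = value
    return out

def foo(x):
    # functional form of:  x[:5] = 0;  return 2 * x
    x = functional_slice_set(x, 0, 0, 5)
    return [2 * v for v in x]
```

Here `foo` computes the same result as the mutating version while leaving its argument untouched, which is the property the converter needs.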
t-vi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756717849
Maybe they tried to take the same matching shortcut internally...
masahi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756720013
I experimented with this approach to support inplace update in torchvision
faster rcnn / mask rcnn:
https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2
tqchen merged pull request #7228:
URL: https://github.com/apache/tvm/pull/7228
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new 29da763 [CI] make sure submodule checkout in clean sta
t-vi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756764112
I don't want to ruin the party, but does `unsqueeze_` work as is?
We would want to update future use of the input to refer to the output (same
for any of the "simple" inplace).
I
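The point about "simple" inplace ops can be sketched as a use-rewriting step (hypothetical data structures, not the actual converter): after turning `unsqueeze_` into its functional form, every later use of the original input must be redirected to the new output.

```python
# Hypothetical sketch: each op is (name, input_names, output_name).
# After the inplace op at position `start` is converted to a functional
# one, redirect later uses of the old value to the new output.
def redirect_uses(ops, start, old_name, new_name):
    rewritten = []
    for i, (op, inputs, output) in enumerate(ops):
        if i > start:  # only uses after the converted op are redirected
            inputs = [new_name if n == old_name else n for n in inputs]
        rewritten.append((op, inputs, output))
    return rewritten
```

Without this redirection, downstream ops would keep reading the pre-unsqueeze value, which is exactly the hazard raised above.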
d-smirnov commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r553995726
##
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##
@@ -19,12 +19,15 @@
import numpy as np
import tvm
+import tvm._ffi
Review comment:
d-smirnov commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r553995853
##
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##
@@ -71,6 +74,61 @@ def partition_for_arm_compute_lib(mod, params=None):
return seq(mod)
d-smirnov commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r553996079
##
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##
@@ -71,6 +74,61 @@ def partition_for_arm_compute_lib(mod, params=None):
return seq(mod)
d-smirnov commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r553997004
##
File path: src/relay/backend/contrib/arm_compute_lib/codegen.cc
##
@@ -126,7 +127,7 @@ class ACLJSONSerializer : public
backend::contrib::JSONSerializer
lhutton1 commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r554048506
##
File path: src/runtime/contrib/arm_compute_lib/acl_runtime.cc
##
@@ -269,6 +268,64 @@ class ACLRuntime : public JSONRuntimeBase {
layer->function = f
mbrookhart commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-756871097
For posterity, @trevor-m and I did some offline debugging yesterday, and
#7229 seems to fix the issue.
comaniac merged pull request #7226:
URL: https://github.com/apache/tvm/pull/7226
comaniac pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from 29da763 [CI] make sure submodule checkout in clean state (#7228)
add 54c995d [AutoScheduler] Do not return naive
comaniac commented on pull request #7226:
URL: https://github.com/apache/tvm/pull/7226#issuecomment-756896071
Thanks @jcf94 @merrymercy
aidamanzano commented on issue #3545:
URL: https://github.com/apache/tvm/issues/3545#issuecomment-756903730
Hello I have checked the discussion on the website[
here](https://discuss.tvm.apache.org/t/error-tvm-build-cmakefiles-cmaketmp-checksymbolexists-c10-fatal-error-opencl-include-dir-not
codeislife99 commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756929498
Thanks for the detailed discussion and uncovering some things which I didn't
know about. I will remove `copy_` from this PR since it's not as trivial as I
thought it would be.
tqchen commented on pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#issuecomment-756939901
also cc @yzhliu @hzfan @comaniac who might be interested
ptrendx commented on pull request #6758:
URL: https://github.com/apache/tvm/pull/6758#issuecomment-756960926
I don't think the change to the NNVM directory is right - NNVM does not rely on
the rest of TVM so it does not know about `ICHECK` macros and compiling it
results in a bunch of `not dec
electriclilies commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r553580601
##
File path: python/tvm/relay/op/random/kernel.py
##
@@ -0,0 +1,134 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contri
tkonolige commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r554176881
##
File path: src/relay/op/random/kernel.cc
##
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor l
tkonolige commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r554177250
##
File path: python/tvm/relay/op/random/kernel.py
##
@@ -0,0 +1,134 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor
tkonolige commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r554178005
##
File path: python/tvm/relay/op/random/kernel.py
##
@@ -0,0 +1,134 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor
comaniac opened a new pull request #7232:
URL: https://github.com/apache/tvm/pull/7232
#7018 introduces dynamic support to `strided_slice`. However, in the
following case, a static workload will be treated as dynamic:
```
cpp.strided_slice(weight, begin=[0, 0, 0, 0], end=[No
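The `None` handling at issue can be sketched like this (hypothetical helper, not TVM's code): if every `begin`/`end` entry can be normalized against a fully known input shape, the slice stays static and never needs the dynamic path.

```python
# Hypothetical normalization of strided_slice arguments: None in `begin`
# means 0, None in `end` means the full extent of that axis, so a fully
# known shape keeps the workload static.
def normalize_slice_args(shape, begin, end):
    norm_begin = [0 if b is None else b for b in begin]
    norm_end = [shape[i] if e is None else min(e, shape[i])
                for i, e in enumerate(end)]
    return norm_begin, norm_end
```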
tqchen commented on pull request #6758:
URL: https://github.com/apache/tvm/pull/6758#issuecomment-756990130
@ptrendx you are right, please send another PR to revert the change in NNVM
part
masahi opened a new pull request #7233:
URL: https://github.com/apache/tvm/pull/7233
ymwangg opened a new pull request #7234:
URL: https://github.com/apache/tvm/pull/7234
Currently `autotvm.task.ConfigSpace.get()` can take any number. This PR adds
a boundary check and fixes some typos.
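A minimal sketch of the boundary check being described (hypothetical class, not autotvm's actual `ConfigSpace`): `get()` rejects out-of-range indices instead of silently accepting any number.

```python
# Hypothetical ConfigSpace-like container whose get() validates the index.
class ConfigSpaceSketch:
    def __init__(self, entities):
        self._entities = list(entities)

    def __len__(self):
        return len(self._entities)

    def get(self, index):
        if not 0 <= index < len(self._entities):
            raise IndexError(
                "index %d out of range for config space of size %d"
                % (index, len(self._entities)))
        return self._entities[index]
```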
tkonolige commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r554185452
##
File path: python/tvm/topi/random/kernel.py
##
@@ -0,0 +1,408 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor lic
d-smirnov commented on a change in pull request #7206:
URL: https://github.com/apache/tvm/pull/7206#discussion_r554187386
##
File path: src/runtime/contrib/arm_compute_lib/acl_runtime.cc
##
@@ -269,6 +268,64 @@ class ACLRuntime : public JSONRuntimeBase {
layer->function =
tkonolige commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757000179
Would it be a better idea to have two separate scatter implementations (the
parallel one and the sequential one) and let autotvm figure out which is
better? Then we don't have to h
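The sequential implementation under discussion can be sketched in pure Python (illustrative only; the real versions are generated CUDA IR): later updates to the same index win, which is exactly the ordering guarantee a naive parallel version loses.

```python
# Toy sequential scatter along axis 0: start from a copy of `data`,
# then each row of `updates` overwrites the row picked by `indices`.
# Processing order matters when `indices` contains duplicates.
def scatter_axis0(data, indices, updates):
    out = [row[:] for row in data]
    for i, idx in enumerate(indices):
        out[idx] = updates[i][:]
    return out
```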
mbrookhart commented on pull request #7232:
URL: https://github.com/apache/tvm/pull/7232#issuecomment-757002649
I didn't know we could use None. Is there a way to test it in the unit
tests?
I'd like to see a unit test, but otherwise it looks good.
masahi commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403
The second text block is an excerpt from the output of `nvprof
--print-gpu-trace`, showing elapsed time, launch config etc of each kernel
executed, in order.
I don't have othe
masahi edited a comment on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403
The second text block is an excerpt from the output of `nvprof
--print-gpu-trace`, showing elapsed time, launch config etc of each kernel
executed, in order. The first line is
mbrookhart commented on a change in pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#discussion_r554201277
##
File path: python/tvm/topi/cuda/scatter.py
##
@@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out,
update_func):
out_ptr = ib.bu
tkonolige commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757009645
> hmm, this sounds better than picking a random threshold, but do we have
existing uses of autotvm to make such decision? Given that scatter kernels are
extern, I'm not sure if au
masahi commented on a change in pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#discussion_r554204661
##
File path: python/tvm/topi/cuda/scatter.py
##
@@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out,
update_func):
out_ptr = ib.buffer
tkonolige opened a new pull request #7235:
URL: https://github.com/apache/tvm/pull/7235
`make format` hardcoded the upstream branch to `origin/main`. This patch
changes it to ask git for the upstream branch.
@jroesch @tqchen
tkonolige commented on a change in pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#discussion_r554205518
##
File path: python/tvm/topi/cuda/scatter.py
##
@@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out,
update_func):
out_ptr = ib.buf
comaniac commented on pull request #7232:
URL: https://github.com/apache/tvm/pull/7232#issuecomment-757011786
> I didn't know we could use None. Is there a way to test it in the unit
tests?
>
> I'd like to see a unit test, but otherwise it looks good.
It is supported here:
ht
masahi commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757016060
Yes, there are 4 calls to 4D scatter in MaskRCNN; the old kernel was taking
11.6 milliseconds on them in total, making it one of the bottlenecks as shown
in the profile above. This
masahi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-757023959
I'm also not sure if inplace unsqueeze can be supported without any concern,
or more generally how well we are doing with other inplace ops. @codeislife99
Does this change work for your real model?
masahi commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757032109
> Autotvm does this for external libraries which are all extern, so it will
work here
I like the idea of separating sorting based implementation of scatter, so I
want to try t
masahi edited a comment on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757032109
> Autotvm does this for external libraries which are all extern, so it will
work here
@tkonolige I like the idea of separating sorting based implementation of
scatter,
tqchen commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757032872
@tkonolige wonder what the behavior looks like in the case of two upstreams.
e.g. one that points to the fork and another points to the upstream.
t-vi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-757036217
@masahi , @codeislife99 : I mentioned this to one of the other PyTorch devs
and he mentioned that there is a remove mutation pass in PyTorch that should
take care of the cases where th
tkonolige commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-757037556
@masahi Here is an example of having multiple implementations for the same
op, with some of them being external.
https://github.com/apache/tvm/blob/main/python/tvm/relay/op/strate
yzhliu commented on a change in pull request #6370:
URL: https://github.com/apache/tvm/pull/6370#discussion_r554231366
##
File path: include/tvm/topi/einsum.h
##
@@ -0,0 +1,930 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license
yzhliu commented on pull request #6370:
URL: https://github.com/apache/tvm/pull/6370#issuecomment-757044548
pls also rebase and retrigger the ci.
tkonolige commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757045420
I don't think you can have two upstreams?
What I originally had was incorrect. My `main` branch is tracking
`apache/tvm` instead of my local repository, which is why it work
merrymercy commented on pull request #7175:
URL: https://github.com/apache/tvm/pull/7175#issuecomment-757051844
This PR broke the layout_transform template used in graph tuner.
https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de87bc9566fe33a/python/tvm/autotvm/graph_tuner/base
merrymercy edited a comment on pull request #7175:
URL: https://github.com/apache/tvm/pull/7175#issuecomment-757051844
This PR broke the layout_transform template used in graph tuner.
https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de87bc9566fe33a/python/tvm/autotvm/graph_tun
lmzheng pushed a change to branch revert-7175-main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
at afa896a Revert "[AutoTVM-FIX] avoid unexpected value(1) of search
space when get length for uninitiated sea
lmzheng pushed a commit to branch revert-7175-main
in repository https://gitbox.apache.org/repos/asf/tvm.git
commit afa896a72f413b1556daa764c13a75b5601bca61
Author: Lianmin Zheng
AuthorDate: Fri Jan 8 15:50:44 2021 -0800
Re
merrymercy opened a new pull request #7236:
URL: https://github.com/apache/tvm/pull/7236
Reverts apache/tvm#7175 because it breaks the graph tuner.
see comments https://github.com/apache/tvm/pull/7175#issuecomment-757051844
tqchen commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757053449
Given that formatting can be done simply via
`./tests/lint/git-clang-format.sh upstream/main` (where upstream points to
apache/tvm), I feel it is not necessary to introduce the
tkonolige commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757055740
`./tests/lint/git-clang-format.sh upstream/main` is incorrect if you are
behind main. Also some people have origin point to apache/tvm and some have it
pointing to their fork. Eve
tqchen commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059177
The flow that set an upstream option has been working well so far:
```
git remote add upstream https://github.com/apache/tvm
./tests/lint/git-clang-format.sh -i upstream
merrymercy commented on pull request #7070:
URL: https://github.com/apache/tvm/pull/7070#issuecomment-757059470
For x86 cpu, it is recommended to use `enable_cpu_cache_flush=True` in
`LocalRunner`, as shown in the tutorial.
https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de8
tqchen edited a comment on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059177
The flow that set an upstream option has been working well so far:
Setup upstream
```
git remote add upstream https://github.com/apache/tvm
```
format
tkonolige commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059633
Seems like half the places use upstream and half use origin.
tqchen commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757060481
This is because of the difference between CI setting and local. In CI,
origin points to the `apache/tvm`, so the CI script correctly points to
```origin/main```.
When we for
tqchen edited a comment on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757060481
This is because of the difference between CI setting and local. In CI,
origin points to the `apache/tvm` and there is no upstream being setup, so the
CI script correctly point
tkonolige commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757062047
We should definitely use merge-base. I'm fine hardcoding upstream/main if
it is the recommended way in the docs.
masahi commented on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-757065521
Interesting, if that can convert `unsqueeze_` to `unsqueeze` when Torch can
prove it safe, that works great for us. @codeislife99 Do you want to try that?
tmoreau89 merged pull request #7083:
URL: https://github.com/apache/tvm/pull/7083
tmoreau89 closed issue #6813:
URL: https://github.com/apache/tvm/issues/6813
moreau pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new 701bcc2 [RELAY,TOPI] Threefry PRNG: splittable and sta
tmoreau89 commented on pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#issuecomment-757077540
Thank you @altanh @electriclilies @tqchen @jwfromm @MarisaKirisame for the
reviews, the PR has been merged.
tqchen commented on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757078648
can you elaborate a bit? I think that really depends on how the
git-clang-format is implemented.
I ran a quick test on a branch that falls behind main and apply another
commit
tqchen edited a comment on pull request #7235:
URL: https://github.com/apache/tvm/pull/7235#issuecomment-757078648
can you elaborate a bit? I think that really depends on how the
git-clang-format is implemented. If it runs on the diff of the current commit
and the base, then seems we shoul
tqchen commented on a change in pull request #7084:
URL: https://github.com/apache/tvm/pull/7084#discussion_r554276298
##
File path: src/tir/transforms/make_packed_api.cc
##
@@ -41,6 +41,56 @@
namespace tvm {
namespace tir {
+class ReturnRewriter : public StmtMutator {
+ pu
lixiaoquan opened a new pull request #7237:
URL: https://github.com/apache/tvm/pull/7237
comaniac merged pull request #7232:
URL: https://github.com/apache/tvm/pull/7232
comaniac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new 02ef6e6 [TOPI] Treat undefined elements as constants
comaniac commented on pull request #7232:
URL: https://github.com/apache/tvm/pull/7232#issuecomment-757089909
Thanks @mbrookhart @yzhliu