kevinthesun commented on a change in pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#discussion_r566327256
##
File path: src/relay/backend/vm/compiler.cc
##
@@ -985,8 +985,11 @@ transform::Sequential MemoryOpt(tvm::Target host_target,
TargetsMap targets) {
areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r566336625
##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or
masahi commented on pull request #7334:
URL: https://github.com/apache/tvm/pull/7334#issuecomment-769335323
@ybai62868 please post this to the discuss forum or open an issue,
clarifying what exactly you mean by "it still does not work"
areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r566353673
##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or
kevinthesun commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769412338
@mbrookhart Thanks. It would be great if you could try TF SSD and Faster R-CNN
so that we can ensure there is no regression for TF models as well.
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from 0c94604 include build prefix
add ae9dcb0 switch to python script for expanding globs
No new
mbrookhart opened a new pull request #7361:
URL: https://github.com/apache/tvm/pull/7361
@jroesch @masahi
As discussed, this disables MemoryPlan in the VM until we can rewrite it to
do full reuse planning. The current pass slows down compilation a lot without
providing a strong
tqchen commented on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769396122
cc @zhiics @masahi
This is an automated message from the Apache Git Service.
To respond to the message, please log on to
tqchen opened a new issue #7363:
URL: https://github.com/apache/tvm/issues/7363
Seems to be quite frequent in recent PRs, might be related to #7346
https://ci.tlcpack.ai/job/tvm/job/main/495/execution/node/449/log/
electriclilies edited a comment on pull request #7362:
URL: https://github.com/apache/tvm/pull/7362#issuecomment-769411425
LGTM!
masahi edited a comment on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769424317
So is this a flaky segfault? Not sure how I can reproduce this. Are there
common characteristics in the nodes that failed? If one node fails, does it fail
consistently?
masahi commented on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769424317
So is this a flaky segfault? Not sure how I can reproduce this.
jroesch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from c0f46b6 revert attempt to use globs in pack_libs, switch to building
standalone_crt
add e85c94c
masahi edited a comment on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769429948
It shouldn't take more than a few minutes. It should definitely run much
faster than TF SSD test, which takes like 20 min (done in a separate thread
mbrookhart commented on pull request #7362:
URL: https://github.com/apache/tvm/pull/7362#issuecomment-769430372
Thanks @jwfromm @electriclilies
kevinthesun commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769469662
@mbrookhart Sure. You can refer to [tf ssd integration
test](https://github.com/apache/tvm/blob/main/tests/python/frontend/tensorflow/test_forward.py#L3090).
mbrookhart opened a new pull request #7368:
URL: https://github.com/apache/tvm/pull/7368
I recently spent a lot of time fighting dynamic rank issues in a kind of
crazy ONNX model. Fixing it required doing incremental dynamic-to-static before
type inference. This PR basically changes the
masahi edited a comment on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769429948
It shouldn't take more than a few minutes. It should run much faster than
TF SSD test, which takes like 20 min (done in a separate thread
CircleSpin opened a new pull request #7366:
URL: https://github.com/apache/tvm/pull/7366
Currently tvmc does not allow users to override specific shapes (such as
batch sizes) and instead uses what is defined within the model file. In most
cases this is practical, but there are
mbrookhart commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769452097
@kevinthesun I'd be happy to do some testing. Do you have scripts for
running those models? I'm not finding them in the tutorials.
altanh commented on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769340016
@masahi
```
import numpy as np
import tvm
from tvm import relay
x = relay.var("x", shape=(3, 4), dtype="float32")
y = relay.clip(x, 0, np.inf)
f =
```
jwfromm opened a new pull request #7364:
URL: https://github.com/apache/tvm/pull/7364
I encountered a bug due to our importer using int32 for topk rather than
onnx's specified int64. To prevent similar errors, I've added an output
datatype check to our onnx tests. This check uncovered a
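The mismatch described above (int32 indices where ONNX specifies int64 for TopK) is exactly the kind of bug a dtype assertion catches early. A minimal NumPy-only sketch of such a check; the helper name and harness are hypothetical, not the actual TVM test code:

```python
import numpy as np

def assert_dtype(actual, expected, name="output"):
    # Hypothetical helper: fail loudly when an output's dtype drifts from
    # the dtype the reference spec (here, ONNX) mandates.
    if np.dtype(actual.dtype) != np.dtype(expected):
        raise AssertionError(
            f"{name}: got dtype {actual.dtype}, expected {np.dtype(expected)}"
        )

# ONNX TopK mandates int64 indices; an importer emitting int32 would fail here.
indices = np.argsort(np.array([3.0, 1.0, 2.0]))[::-1].astype(np.int64)
assert_dtype(indices, np.int64, name="topk_indices")
```

Running the check over every test output, rather than just values, is what surfaced the importer bug.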
electriclilies commented on pull request #7362:
URL: https://github.com/apache/tvm/pull/7362#issuecomment-769411425
LGTM, I just noticed that the dynamic FullRel also doesn't return false if
fill_shape is null. Might be good to fix that in this PR too.
masahi commented on pull request #7364:
URL: https://github.com/apache/tvm/pull/7364#issuecomment-769425946
please fix the lint issue
mbrookhart merged pull request #7362:
URL: https://github.com/apache/tvm/pull/7362
mbrookhart pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new b8ad146 [Relay] Type Relation Fixes (#7362)
masahi commented on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769429948
It should definitely run much faster than TF SSD test.
Disabled one of rewrites in https://github.com/apache/tvm/pull/7365. The
other rewrite added in #7346 should be harmless.
mdw-octoml commented on pull request #7367:
URL: https://github.com/apache/tvm/pull/7367#issuecomment-769447656
@jroesch @tkonolige
mdw-octoml opened a new pull request #7367:
URL: https://github.com/apache/tvm/pull/7367
A few small docstring fixes.
comaniac commented on pull request #7366:
URL: https://github.com/apache/tvm/pull/7366#issuecomment-769456612
Agree with @leandron. We can actually just keep one PR for both purposes,
because the PyTorch fix is more like a corner case of supporting custom shapes.
tkonolige commented on a change in pull request #7313:
URL: https://github.com/apache/tvm/pull/7313#discussion_r566479973
##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -943,18 +1047,36 @@ def _timed_rpc_run(
if error_no == 0:
try:
-args =
altanh edited a comment on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769340016
@masahi
```
import numpy as np
import tvm
from tvm import relay
x = relay.var("x", shape=(3, 4), dtype="float32")
y = relay.clip(x, 0, np.inf)
f =
```
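The repro above is cut off by the digest, but the NumPy analogue of the Relay expression shows the intended semantics: an infinite upper bound turns clip into a pure lower clamp.

```python
import numpy as np

x = np.array([[-1.0, 0.5, 2.0, -3.0]], dtype="float32")

# With np.inf as the upper bound, only the lower clamp at 0 has any effect.
y = np.clip(x, 0, np.inf)

print(y)  # negatives become 0.0; the rest pass through unchanged
```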
junrushao1994 merged pull request #7152:
URL: https://github.com/apache/tvm/pull/7152
junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from 67acad3 [Relay][PatternLang] Bug fix of rewrite func attr (#7358)
add f17cba7 [RUNTIME] Improve error messages
junrushao1994 commented on pull request #7152:
URL: https://github.com/apache/tvm/pull/7152#issuecomment-769407463
Thank you @tkonolige @tqchen @areusch! It is now merged!
icemelon9 commented on a change in pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#discussion_r566425399
##
File path: src/relay/backend/vm/compiler.cc
##
@@ -985,8 +985,11 @@ transform::Sequential MemoryOpt(tvm::Target host_target,
TargetsMap targets) {
masahi edited a comment on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769424317
So is this a flaky segfault? Not sure how I can reproduce this. Are there
common characteristics in nodes that failed (What GPU, which CUDA version etc)?
If one node fails, does
masahi edited a comment on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769424317
So is this a flaky segfault? Not sure how I can reproduce this. Are there
common characteristics in nodes that failed (what GPU, which CUDA version, etc.)?
If one node fails, does
tqchen commented on issue #7363:
URL: https://github.com/apache/tvm/issues/7363#issuecomment-769428807
Based on my current read it could also be a timeout, in which case we need
to look into whether the detection model itself runs too long and whether we
could build a faster unittest.
masahi opened a new pull request #7365:
URL: https://github.com/apache/tvm/pull/7365
Let's see if this would mitigate flaky segfault problem
https://github.com/apache/tvm/issues/7363
@tqchen @zhiics
CircleSpin commented on pull request #7366:
URL: https://github.com/apache/tvm/pull/7366#issuecomment-769432247
@leandron @jwfromm @mbrookhart
Could you take a look at this PR and let me know what you think?
jcf94 commented on a change in pull request #7313:
URL: https://github.com/apache/tvm/pull/7313#discussion_r565944345
##
File path: tutorials/auto_scheduler/tune_sparse_x86.py
##
@@ -0,0 +1,331 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more
rafzi closed pull request #7349:
URL: https://github.com/apache/tvm/pull/7349
rafzi commented on pull request #7349:
URL: https://github.com/apache/tvm/pull/7349#issuecomment-768935106
hi @areusch
Thanks for the clarification. I also saw the comment in the tutorial now. If
I follow the instruction and remove the call to `MemoryManagerCreate`, the
command
giuseros commented on a change in pull request #7345:
URL: https://github.com/apache/tvm/pull/7345#discussion_r565990717
##
File path: python/tvm/relay/analysis/analysis.py
##
@@ -448,3 +449,71 @@ def get_calibration_data(mod, data):
calib_data[gvar] = value
ybai62868 commented on pull request #7334:
URL: https://github.com/apache/tvm/pull/7334#issuecomment-768924014
typo -> "CumSum": AttrCvt("cumsum",{"axis": "axis", "dtype": "dtye"}),"
ybai62868 commented on pull request #7334:
URL: https://github.com/apache/tvm/pull/7334#issuecomment-768923803
Hi @masahi ,
Thanks for your contribution of "cumsum". I want to use this op in an ONNX
model.
I ran into a problem when I tried to compile an ONNX model with the "CumSum" op to
kevinthesun commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769487309
LGTM
masahi commented on pull request #7368:
URL: https://github.com/apache/tvm/pull/7368#issuecomment-769520559
Since you always run `PrepareArgs` when you find a dynamic op, I'd run
`PrepareArgs` here
comaniac commented on pull request #7366:
URL: https://github.com/apache/tvm/pull/7366#issuecomment-769522432
@CircleSpin @ekalda could you folks organize a plan to work on the one PR
and close the other?
masahi commented on a change in pull request #7368:
URL: https://github.com/apache/tvm/pull/7368#discussion_r566530421
##
File path: src/relay/transforms/dynamic_to_static.cc
##
@@ -32,29 +32,57 @@
namespace tvm {
namespace relay {
+Expr PrepareInput(const Expr& expr) {
+
Laurawly commented on a change in pull request #7188:
URL: https://github.com/apache/tvm/pull/7188#discussion_r566548443
##
File path: python/tvm/topi/cuda/ssd/multibox.py
##
@@ -341,47 +341,25 @@ def transform_loc(loc, loc_base_idx, anchor,
anchor_base_idx, clip, vx, vy, vw,
masahi opened a new pull request #7370:
URL: https://github.com/apache/tvm/pull/7370
Fixes https://github.com/apache/tvm/issues/7339
Not sure if this is the best solution
```
def @main(%x: Tensor[(3, 4), float32]) -> Tensor[(3, 4), float32] {
clip(%x, a_min=0.5f,
```
masahi merged pull request #7364:
URL: https://github.com/apache/tvm/pull/7364
masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from f7275f9 Some docstring fixes. (#7367)
add f7862e7 [Relay][Frontend[Onnx] Add testing for output datatypes and
fix
masahi commented on pull request #7364:
URL: https://github.com/apache/tvm/pull/7364#issuecomment-769584258
thanks @jwfromm @mbrookhart
slyubomirsky commented on pull request #7318:
URL: https://github.com/apache/tvm/pull/7318#issuecomment-769519345
Based on the discussion in the thread, it seems the final output of the let
node should have a compiler_end annotation on it (the intended tagging behavior
is described
masahi commented on pull request #7368:
URL: https://github.com/apache/tvm/pull/7368#issuecomment-769521672
cc @t-vi who might be interested in incremental type inference
zhiics commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566548070
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
altanh commented on a change in pull request #7370:
URL: https://github.com/apache/tvm/pull/7370#discussion_r566579870
##
File path: src/parser/tokenizer.h
##
@@ -521,6 +521,12 @@ struct Tokenizer {
}
auto span = SpanFrom(line, col);
+
+ if (keyword ==
zhiics commented on pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#issuecomment-769478145
@masahi oops, I missed a commit. We should be able to remove it
masahi commented on pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#issuecomment-769477894
Do we want to remove the python version?
jcf94 commented on a change in pull request #7313:
URL: https://github.com/apache/tvm/pull/7313#discussion_r566526531
##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -719,6 +722,87 @@ def local_builder_build(inputs, timeout, n_parallel,
build_func="default", verbo
zhiics commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566548766
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
slyubomirsky commented on pull request #7318:
URL: https://github.com/apache/tvm/pull/7318#issuecomment-769545493
@comaniac @zhiics I'm not sure I quite understand how let nodes are being
handled in the code. I think the reason for the weird behavior in the given
test case is that when
altanh commented on pull request #7370:
URL: https://github.com/apache/tvm/pull/7370#issuecomment-769572147
Can you use `std::numeric_limits::infinity()`? I'm not sure there is a
difference between single vs double precision inf when it gets casted to the
correct dtype, as floating point
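The point about single- vs double-precision infinity can be checked without TVM: round-tripping a double-precision infinity through the IEEE-754 single-precision format (Python's `struct` format code `'f'`) still yields infinity, so the two coincide after the cast to the target dtype. A stdlib-only sketch:

```python
import math
import struct

def to_float32(value: float) -> float:
    # Round-trip through the 4-byte IEEE-754 single-precision encoding.
    return struct.unpack("f", struct.pack("f", value))[0]

inf64 = float("inf")
inf32 = to_float32(inf64)

print(math.isinf(inf32))  # True: infinity survives the narrowing cast
```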
masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from ef032b3 Remove MemoryPlan from VM passes (#7361)
add f7275f9 Some docstring fixes. (#7367)
No new revisions were
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566516044
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566517364
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566517508
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566521642
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on pull request #7368:
URL: https://github.com/apache/tvm/pull/7368#issuecomment-769521367
Can we assume that compile time regression is the worst for BERT? I don't
recall infer type or fold constant being slow on other models.
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566551285
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on pull request #7370:
URL: https://github.com/apache/tvm/pull/7370#issuecomment-769597522
> BTW just to confirm, this fix should correctly handle the case of
`-np.inf` right? Might be worth adding a test case to be sure.
@altanh Supporting -inf was a bit tricky,
zhiics opened a new pull request #7369:
URL: https://github.com/apache/tvm/pull/7369
This PR ports the memory manifest pass to C++. This would help reduce
compilation time for large models.
@masahi @icemelon9 @anijain2305 @jroesch @mbrookhart @kevinthesun
masahi edited a comment on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769540320
Printing `mod` shows that `inff` is a thing:
```
def @main(%x: Tensor[(3, 4), float32]) {
clip(%x, a_min=0.5f, a_max=inff)
}
```
Do we want to special-case
altanh commented on pull request #7370:
URL: https://github.com/apache/tvm/pull/7370#issuecomment-769572551
BTW just to confirm, this fix should correctly handle the case of `-np.inf`
right? Might be worth adding a test case to be sure.
masahi merged pull request #7367:
URL: https://github.com/apache/tvm/pull/7367
kevinthesun pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new ef032b3 Remove MemoryPlan from VM passes (#7361)
altanh commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769487509
I'm not sure we have the infra for this, but going forward it would also be
interesting to see the memory usage differences with and without, since this
pass would affect allocation
kevinthesun commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769487569
Thanks @mbrookhart @masahi @icemelon9 @zhiics
kevinthesun merged pull request #7361:
URL: https://github.com/apache/tvm/pull/7361
jwfromm commented on pull request #7366:
URL: https://github.com/apache/tvm/pull/7366#issuecomment-769510961
Funny that these two very similar PRs landed so close to one another. After
reading both, I would argue in favor of choosing this implementation. I think
it's important to be able
jwfromm edited a comment on pull request #7366:
URL: https://github.com/apache/tvm/pull/7366#issuecomment-769510961
Funny that these two very similar PRs landed so close to one another. After
reading both, I would argue in favor of choosing this implementation. I think
it's important to be
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566521265
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
zhiics commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566547639
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769540320
Printing `mod` shows that `inff` is a thing:
```
def @main(%x: Tensor[(3, 4), float32]) {
clip(%x, a_min=0.5f, a_max=inff)
}
```
Do we want to special-case `inf` in
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566550208
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
masahi commented on pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#issuecomment-769553313
@zhiics FYI compiling MaskRCNN with C++ `ManifestAlloc` generates the
following warning that I didn't see in the python version. I don't think it
matters, just to let you know if
masahi commented on a change in pull request #7370:
URL: https://github.com/apache/tvm/pull/7370#discussion_r566597304
##
File path: src/parser/tokenizer.h
##
@@ -521,6 +521,12 @@ struct Tokenizer {
}
auto span = SpanFrom(line, col);
+
+ if (keyword ==
mbrookhart commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769481760
Okay for TF-SSD, I timed 10 calls to vm.invoke after all of the compilation
happened and averaged them, running on a Ryzen 5950x
main:
34.41 ms/iteration
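For readers who want to reproduce this kind of measurement, a generic sketch of the approach described (time repeated calls after warmup and average); `fn` is a placeholder standing in for a compiled call such as vm.invoke:

```python
import time

def average_ms(fn, warmup=3, repeat=10):
    # Warm up first so one-time costs don't pollute the average.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeat):
        fn()
    return (time.perf_counter() - start) / repeat * 1e3

# Dummy workload standing in for model inference.
elapsed = average_ms(lambda: sum(i * i for i in range(10_000)))
print(f"{elapsed:.2f} ms/iteration")
```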
masahi commented on a change in pull request #7369:
URL: https://github.com/apache/tvm/pull/7369#discussion_r566519024
##
File path: src/relay/transforms/memory_alloc.cc
##
@@ -0,0 +1,508 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
altanh commented on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769541879
Not sure if Python should be the source of truth but `float("inf")` returns
`inf` whereas `float("inff")` fails. Also numpy seems to have similar behavior
(but they somehow avoid
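The Python behavior mentioned above is easy to confirm: `float()` accepts "inf" (in any capitalization, with an optional sign) but rejects the C-style single-precision literal "inff".

```python
print(float("inf"))   # inf
print(float("-Inf"))  # -inf

try:
    float("inff")
except ValueError as err:
    print("rejected:", err)  # "inff" is not a valid float literal
```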
kevinthesun commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769296390
cc @zhiics @icemelon9
masahi commented on pull request #7361:
URL: https://github.com/apache/tvm/pull/7361#issuecomment-769298663
Can we simply use the pass infra to disable it, like
```
with tvm.transform.PassContext(opt_level=3, disabled_pass=["MemoryPlan"]):
...
```
comaniac commented on a change in pull request #7359:
URL: https://github.com/apache/tvm/pull/7359#discussion_r566306351
##
File path: python/tvm/driver/tvmc/common.py
##
@@ -136,3 +138,40 @@ def tracker_host_port_from_cli(rpc_tracker_str):
logger.info("RPC tracker
masahi commented on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-769299290
I'm interested in taking this up, @altanh can you post a repro script?
jroesch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from ae9dcb0 switch to python script for expanding globs
add c0f46b6 revert attempt to use globs in