[GitHub] [tvm] insop commented on a change in pull request #7230: [FRONTEND][Mxnet][nuymp] Adding _npi_advanced_indexing_multiple

2021-01-08 Thread GitBox
insop commented on a change in pull request #7230: URL: https://github.com/apache/tvm/pull/7230#discussion_r553766990 ## File path: tests/python/frontend/mxnet/test_forward.py ## @@ -1935,6 +1935,29 @@ def verify(data_shape, axis, use_length, length): verify((2, 3, 4), 2,

[GitHub] [tvm] codeislife99 opened a new pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
codeislife99 opened a new pull request #7231: URL: https://github.com/apache/tvm/pull/7231 This PR adds the above ops to the PT Frontend to support customer models. This is an automated message from the Apache Git Service. To r

[GitHub] [tvm] codeislife99 commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
codeislife99 commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756674827 cc: @anijain2305 @trevor-m This is an automated message from the Apache Git Service. To respond to the message

[GitHub] [tvm] masahi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756693366 cc @t-vi @yongwww This is probably not how we should support `copy_`. We discussed this issue before. Later I found a promising way to support inplace copy: We first co

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756693366 cc @t-vi @yongwww This is probably not how we should support `copy_`. We discussed this issue before. Later I found a promising way to support inplace copy/assig

[GitHub] [tvm] d-smirnov commented on pull request #7113: [TFLite] Quantized version of unit test for Dense

2021-01-08 Thread GitBox
d-smirnov commented on pull request #7113: URL: https://github.com/apache/tvm/pull/7113#issuecomment-756695634 @siju-samuel @FrozenGene bump This is an automated message from the Apache Git Service. To respond to the mess

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756693366 cc @t-vi @yongwww This is probably not how we should support `copy_`. We discussed this issue before. Later I found a promising way to support inplace copy/assig

[GitHub] [tvm] t-vi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756705288 @masahi can you elaborate a bit how you want to do that? So to my mind there are two parts to this. For slice assignments ```python @torch.jit.script def foo(x): x[:

[GitHub] [tvm] t-vi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756705288 @masahi can you elaborate a bit how you want to do that? So to my mind there are two parts to this. For slice assignments ```python @torch.jit.script def foo(x):

[GitHub] [tvm] masahi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756716845 For the first case, ``` def foo(x): x[:5] = 0 return 2 * x ``` Using the two passes I mentioned, I get this graph: ``` graph(%x : Float(10:1, requir
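For readers following the thread, the flattened snippet above corresponds roughly to the self-contained script below. The excerpt does not name "the two passes" masahi refers to, so this sketch only scripts the function and dumps its graph; the exact node names it prints (e.g. an `aten::copy_` for the slice assignment) depend on the PyTorch version and are illustrative, not normative.

```python
# Minimal reproduction of the slice-assignment case discussed above.
# Printing the graph shows the mutating op(s) a converter would have to handle.
import torch

@torch.jit.script
def foo(x: torch.Tensor) -> torch.Tensor:
    x[:5] = torch.zeros(5)  # in-place write into a slice of the input
    return 2 * x

print(foo.graph)  # inspect the ops emitted for the in-place slice assignment
```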

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756716845 For the first case, ``` def foo(x): x[:5] = 0 return 2 * x ``` Using the two passes I mentioned, I get this graph: ``` graph(%x : Float(10:1,

[GitHub] [tvm] t-vi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756717849 Maybe they tried to take the same matching shortcut internally... This is an automated message from the Apache Git Serv

[GitHub] [tvm] masahi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756720013 I experimented with this approach to support inplace update in torchvision faster rcnn / mask rcnn: https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756720013 I experimented with this approach to support inplace update in torchvision faster rcnn / mask rcnn: https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa1

[GitHub] [tvm] tqchen merged pull request #7228: [CI] make sure submodule checkout in clean state

2021-01-08 Thread GitBox
tqchen merged pull request #7228: URL: https://github.com/apache/tvm/pull/7228 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[tvm] branch main updated: [CI] make sure submodule checkout in clean state (#7228)

2021-01-08 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new 29da763 [CI] make sure submodule checkout in clean sta

[GitHub] [tvm] t-vi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756764112 I don't want to ruin the party, but does `unsqueeze_` work as is? We would want to update future use of the input to refer to the output (same for any of the "simple" inplace). I

[GitHub] [tvm] t-vi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756764112 I don't want to ruin the party, but does `unsqueeze_` work as is? We would want to update future use of the input to refer to the output (same for any of the "simple" inplace)
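t-vi's concern is easiest to see in plain PyTorch: `unsqueeze_` mutates its input and returns that same tensor, so a functional translation is only equivalent if every later use of the input is redirected to the new value. A tiny illustration (plain PyTorch, not frontend code):

```python
import torch

x = torch.zeros(3)
y = x.unsqueeze_(0)      # in-place: x itself now has shape (1, 3); y is the same tensor
print(x.shape, y.shape)  # torch.Size([1, 3]) torch.Size([1, 3])

# If a converter rewrites this as a pure y = unsqueeze(x), any later reference to x
# must be rewired to y, otherwise it still sees the original (3,) tensor.
```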

[GitHub] [tvm] d-smirnov commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
d-smirnov commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r553995726 ## File path: python/tvm/relay/op/contrib/arm_compute_lib.py ## @@ -19,12 +19,15 @@ import numpy as np import tvm +import tvm._ffi Review comment:

[GitHub] [tvm] d-smirnov commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
d-smirnov commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r553995853 ## File path: python/tvm/relay/op/contrib/arm_compute_lib.py ## @@ -71,6 +74,61 @@ def partition_for_arm_compute_lib(mod, params=None): return seq(mod)

[GitHub] [tvm] d-smirnov commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
d-smirnov commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r553996079 ## File path: python/tvm/relay/op/contrib/arm_compute_lib.py ## @@ -71,6 +74,61 @@ def partition_for_arm_compute_lib(mod, params=None): return seq(mod)

[GitHub] [tvm] d-smirnov commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
d-smirnov commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r553997004 ## File path: src/relay/backend/contrib/arm_compute_lib/codegen.cc ## @@ -126,7 +127,7 @@ class ACLJSONSerializer : public backend::contrib::JSONSerializer

[GitHub] [tvm] lhutton1 commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
lhutton1 commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r554048506 ## File path: src/runtime/contrib/arm_compute_lib/acl_runtime.cc ## @@ -269,6 +268,64 @@ class ACLRuntime : public JSONRuntimeBase { layer->function = f

[GitHub] [tvm] mbrookhart commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-08 Thread GitBox
mbrookhart commented on pull request #7123: URL: https://github.com/apache/tvm/pull/7123#issuecomment-756871097 For posterity, @trevor-m and I did some offline debugging yesterday, and #7229 seems to fix the issue. This is a

[GitHub] [tvm] comaniac merged pull request #7226: [AutoScheduler] Do not return naive schedule in tracing mode

2021-01-08 Thread GitBox
comaniac merged pull request #7226: URL: https://github.com/apache/tvm/pull/7226 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the

[tvm] branch main updated (29da763 -> 54c995d)

2021-01-08 Thread comaniac
This is an automated email from the ASF dual-hosted git repository. comaniac pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 29da763 [CI] make sure submodule checkout in clean state (#7228) add 54c995d [AutoScheduler] Do not return naive

[GitHub] [tvm] comaniac commented on pull request #7226: [AutoScheduler] Do not return naive schedule in tracing mode

2021-01-08 Thread GitBox
comaniac commented on pull request #7226: URL: https://github.com/apache/tvm/pull/7226#issuecomment-756896071 Thanks @jcf94 @merrymercy This is an automated message from the Apache Git Service. To respond to the message, ple

[GitHub] [tvm] aidamanzano commented on issue #3545: Make failed: CheckSymbolExists.c:2:10: fatal error: OpenCL_INCLUDE_DIR-NOTFOUND/CL/cl.h: No such file or directory

2021-01-08 Thread GitBox
aidamanzano commented on issue #3545: URL: https://github.com/apache/tvm/issues/3545#issuecomment-756903730 Hello, I have checked the discussion on the website [here](https://discuss.tvm.apache.org/t/error-tvm-build-cmakefiles-cmaketmp-checksymbolexists-c10-fatal-error-opencl-include-dir-not

[GitHub] [tvm] codeislife99 commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
codeislife99 commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756929498 Thanks for the detailed discussion and uncovering some things which I didn't know about. I will remove `copy_` from this PR since it's not as trivial as I thought it would be.

[GitHub] [tvm] codeislife99 edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
codeislife99 edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-756929498 Thanks for the detailed discussion and uncovering some things which I didn't know about. I will remove `copy_` from this PR since it's not as trivial as I thought it woul

[GitHub] [tvm] tqchen commented on pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tqchen commented on pull request #7083: URL: https://github.com/apache/tvm/pull/7083#issuecomment-756939901 also cc @yzhliu @hzfan @comaniac who might be interested This is an automated message from the Apache Git Service. To

[GitHub] [tvm] ptrendx commented on pull request #6758: More CHECK to ICHECK

2021-01-08 Thread GitBox
ptrendx commented on pull request #6758: URL: https://github.com/apache/tvm/pull/6758#issuecomment-756960926 I don't think the change to the NNVM directory is right - NNVM does not rely on the rest of TVM, so it does not know about `ICHECK` macros, and compiling it results in a bunch of `not dec

[GitHub] [tvm] electriclilies commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
electriclilies commented on a change in pull request #7083: URL: https://github.com/apache/tvm/pull/7083#discussion_r553580601 ## File path: python/tvm/relay/op/random/kernel.py ## @@ -0,0 +1,134 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contri

[GitHub] [tvm] tkonolige commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tkonolige commented on a change in pull request #7083: URL: https://github.com/apache/tvm/pull/7083#discussion_r554176881 ## File path: src/relay/op/random/kernel.cc ## @@ -0,0 +1,89 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor l

[GitHub] [tvm] tkonolige commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tkonolige commented on a change in pull request #7083: URL: https://github.com/apache/tvm/pull/7083#discussion_r554177250 ## File path: python/tvm/relay/op/random/kernel.py ## @@ -0,0 +1,134 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor

[GitHub] [tvm] tkonolige commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tkonolige commented on a change in pull request #7083: URL: https://github.com/apache/tvm/pull/7083#discussion_r554178005 ## File path: python/tvm/relay/op/random/kernel.py ## @@ -0,0 +1,134 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor

[GitHub] [tvm] comaniac opened a new pull request #7232: Fix const array

2021-01-08 Thread GitBox
comaniac opened a new pull request #7232: URL: https://github.com/apache/tvm/pull/7232 #7018 introduces dynamic support for `strided_slice`. However, in the following case, a static workload will be treated as dynamic: ``` cpp.strided_slice(weight, begin=[0, 0, 0, 0], end=[No

[GitHub] [tvm] tqchen commented on pull request #6758: More CHECK to ICHECK

2021-01-08 Thread GitBox
tqchen commented on pull request #6758: URL: https://github.com/apache/tvm/pull/6758#issuecomment-756990130 @ptrendx you are right, please send another PR to revert the change in the NNVM part This is an automated message from t

[GitHub] [tvm] masahi opened a new pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi opened a new pull request #7233: URL: https://github.com/apache/tvm/pull/7233 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to

[GitHub] [tvm] ymwangg opened a new pull request #7234: [AutoTVM] Add index boundary check in ConfigSpace.get()

2021-01-08 Thread GitBox
ymwangg opened a new pull request #7234: URL: https://github.com/apache/tvm/pull/7234 Currently `autotvm.task.ConfigSpace.get()` can take any number. This PR adds a boundary check and fixes some typos. This is an a
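A minimal sketch of the kind of bound check described above (an illustration only, not the PR's actual diff; the real `ConfigSpace` would materialize a `ConfigEntity` rather than return the index):

```python
# Hypothetical stand-in for ConfigSpace.get() with an index boundary check.
class TinySpace:
    def __init__(self, size: int):
        self._size = size

    def __len__(self) -> int:
        return self._size

    def get(self, index: int):
        if not 0 <= index < len(self):
            raise IndexError(f"index {index} out of range for a space of size {len(self)}")
        return index  # the real implementation would build a config entity here

print(TinySpace(4).get(2))   # ok
# TinySpace(4).get(10)       # raises IndexError instead of silently wrapping
```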

[GitHub] [tvm] tkonolige commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tkonolige commented on a change in pull request #7083: URL: https://github.com/apache/tvm/pull/7083#discussion_r554185452 ## File path: python/tvm/topi/random/kernel.py ## @@ -0,0 +1,408 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor lic

[GitHub] [tvm] d-smirnov commented on a change in pull request #7206: [BYOC][ACL] Depthwise convolution support

2021-01-08 Thread GitBox
d-smirnov commented on a change in pull request #7206: URL: https://github.com/apache/tvm/pull/7206#discussion_r554187386 ## File path: src/runtime/contrib/arm_compute_lib/acl_runtime.cc ## @@ -269,6 +268,64 @@ class ACLRuntime : public JSONRuntimeBase { layer->function =

[GitHub] [tvm] tkonolige commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757000179 Would it be a better idea to have two separate scatter implementations (the parallel one and the sequential one) and let autotvm figure out which is better? Then we don't have to h

[GitHub] [tvm] tkonolige edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757000179 Would it be a better idea to have two separate scatter implementations (the parallel one and the sequential one) and let autotvm figure out which is better? Then we don't ha

[GitHub] [tvm] mbrookhart commented on pull request #7232: [TOPI] Treat undefined elements as constants in Array

2021-01-08 Thread GitBox
mbrookhart commented on pull request #7232: URL: https://github.com/apache/tvm/pull/7232#issuecomment-757002649 I didn't know we could use None. Is there a way to test it in the unit tests? I'd like to see a unit test, but otherwise it looks good. -

[GitHub] [tvm] masahi commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. I don't have othe

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. The first line is

[GitHub] [tvm] mbrookhart commented on a change in pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
mbrookhart commented on a change in pull request #7233: URL: https://github.com/apache/tvm/pull/7233#discussion_r554201277 ## File path: python/tvm/topi/cuda/scatter.py ## @@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out, update_func): out_ptr = ib.bu

[GitHub] [tvm] mbrookhart commented on a change in pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
mbrookhart commented on a change in pull request #7233: URL: https://github.com/apache/tvm/pull/7233#discussion_r554201277 ## File path: python/tvm/topi/cuda/scatter.py ## @@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out, update_func): out_ptr = ib.bu

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. The first line is

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. The first line is

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. The first line is

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757006403 The second text block is an excerpt from the output of `nvprof --print-gpu-trace`, showing elapsed time, launch config etc of each kernel executed, in order. The first line is

[GitHub] [tvm] tkonolige commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757009645 > hmm, this sounds better than picking a random threshold, but do we have existing uses of autotvm to make such decision? Given that scatter kernels are extern, I'm not sure if au

[GitHub] [tvm] masahi commented on a change in pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi commented on a change in pull request #7233: URL: https://github.com/apache/tvm/pull/7233#discussion_r554204661 ## File path: python/tvm/topi/cuda/scatter.py ## @@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out, update_func): out_ptr = ib.buffer

[GitHub] [tvm] tkonolige opened a new pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tkonolige opened a new pull request #7235: URL: https://github.com/apache/tvm/pull/7235 make format hardcoded the upstream branch to origin/main. This patch changes it to ask git for the upstream branch. @jroesch @tqchen

[GitHub] [tvm] tkonolige commented on a change in pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige commented on a change in pull request #7233: URL: https://github.com/apache/tvm/pull/7233#discussion_r554205518 ## File path: python/tvm/topi/cuda/scatter.py ## @@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out, update_func): out_ptr = ib.buf

[GitHub] [tvm] comaniac commented on pull request #7232: [TOPI] Treat undefined elements as constants in Array

2021-01-08 Thread GitBox
comaniac commented on pull request #7232: URL: https://github.com/apache/tvm/pull/7232#issuecomment-757011786 > I didn't know we could use None. Is there a way to test it in the unit tests? > > I'd like to see a unit test, but otherwise it looks good. It is supported here: ht

[GitHub] [tvm] masahi commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757016060 Yes, there are 4 calls to 4D scatter in MaskRCNN; the old kernel was taking 11.6 milliseconds on them in total, making it one of the bottlenecks, as shown in the profile above. This

[GitHub] [tvm] masahi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-757023959 I'm also not sure about inplace unsqueeze, or more generally how well we are doing with other inplace ops. @codeislife99 Does this change work for your real model?

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-757023959 I'm also not sure if inplace unsqueeze can be supported without any concern, or more generally how well we are doing with other inplace ops. @codeislife99 Does this change wor

[GitHub] [tvm] masahi edited a comment on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-757023959 I'm also not sure if inplace unsqueeze can be supported without any concern, or more generally how well we are doing with other inplace ops. @codeislife99 Does this change wor

[GitHub] [tvm] masahi commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757032109 > Autotvm does this for external libraries which are all extern, so it will work here I like the idea of separating the sorting-based implementation of scatter, so I want to try t

[GitHub] [tvm] masahi edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
masahi edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757032109 > Autotvm does this for external libraries which are all extern, so it will work here @tkonolige I like the idea of separating the sorting-based implementation of scatter,

[GitHub] [tvm] tqchen commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757032872 @tkonolige wonder what the behavior looks like in the case of two upstreams. e.g. one that points to the fork and another points to the upstream. ---

[GitHub] [tvm] t-vi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
t-vi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-757036217 @masahi , @codeislife99 : I mentioned this to one of the other PyTorch devs and he mentioned that there is a remove mutation pass in PyTorch that should take care of the cases where th
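For context, invoking that pass from Python could look roughly like the sketch below. `torch._C._jit_pass_remove_mutation` is an internal, version-dependent entry point, so both the name and its exact effect here should be treated as assumptions rather than documented API.

```python
# Sketch: ask PyTorch's JIT to rewrite in-place ops (where it can prove it safe)
# before handing the graph to a converter.
import torch

@torch.jit.script
def foo(x: torch.Tensor) -> torch.Tensor:
    x[:5] = torch.zeros(5)
    return 2 * x

graph = foo.graph
torch._C._jit_pass_remove_mutation(graph)  # internal pass; behaviour varies by version
print(graph)  # mutating ops should be replaced by functional equivalents where possible
```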

[GitHub] [tvm] tkonolige commented on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige commented on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757037556 @masahi Here is an example of having multiple implementations for the same op, with some of them being external. https://github.com/apache/tvm/blob/main/python/tvm/relay/op/strate

[GitHub] [tvm] tkonolige edited a comment on pull request #7233: [TOPI] Minor perf improvement for GPU scatter

2021-01-08 Thread GitBox
tkonolige edited a comment on pull request #7233: URL: https://github.com/apache/tvm/pull/7233#issuecomment-757037556 @masahi Here is an example of having multiple implementations for the same op, with some of them being external. https://github.com/apache/tvm/blob/main/python/tvm/relay/op
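The linked strategy file shows the pattern being suggested: several implementations register under one op, and the compiler or tuner picks among them. A condensed, hedged sketch of that registration pattern follows (helper names mirror the existing generic strategy code; the second, sort-based scatter variant is hypothetical here and not the PR's code):

```python
# Sketch of a Relay op strategy exposing more than one implementation for scatter,
# so the choice between them can be made per workload rather than hardcoded.
from tvm import topi
from tvm.relay.op.op import OpStrategy
from tvm.relay.op.strategy.generic import wrap_compute_scatter, wrap_topi_schedule

def scatter_cuda_strategy(attrs, inputs, out_type, target):
    s = OpStrategy()
    s.add_implementation(
        wrap_compute_scatter(topi.cuda.scatter),
        wrap_topi_schedule(topi.generic.schedule_extern),  # scatter kernels are extern
        name="scatter.cuda",
        plevel=10,
    )
    # A second (e.g. sort-based) implementation could be registered here with a
    # different plevel and left to tuning/benchmarking to choose between.
    return s
```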

[GitHub] [tvm] yzhliu commented on a change in pull request #6370: [TOPI] Add einsum operator

2021-01-08 Thread GitBox
yzhliu commented on a change in pull request #6370: URL: https://github.com/apache/tvm/pull/6370#discussion_r554231366 ## File path: include/tvm/topi/einsum.h ## @@ -0,0 +1,930 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license

[GitHub] [tvm] yzhliu commented on pull request #6370: [TOPI] Add einsum operator

2021-01-08 Thread GitBox
yzhliu commented on pull request #6370: URL: https://github.com/apache/tvm/pull/6370#issuecomment-757044548 Please also rebase and retrigger the CI. This is an automated message from the Apache Git Service. To respond to the mes

[GitHub] [tvm] tkonolige commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tkonolige commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757045420 I don't think you can have two upstreams? What I originally had was incorrect. My `main` branch is tracking `apache/tvm` instead of my local repository, which is why it work

[GitHub] [tvm] merrymercy commented on pull request #7175: [AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated search space

2021-01-08 Thread GitBox
merrymercy commented on pull request #7175: URL: https://github.com/apache/tvm/pull/7175#issuecomment-757051844 This PR broke the layout_transform template used in graph tuner. https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de87bc9566fe33a/python/tvm/autotvm/graph_tuner/base

[GitHub] [tvm] merrymercy edited a comment on pull request #7175: [AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated search space

2021-01-08 Thread GitBox
merrymercy edited a comment on pull request #7175: URL: https://github.com/apache/tvm/pull/7175#issuecomment-757051844 This PR broke the layout_transform template used in graph tuner. https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de87bc9566fe33a/python/tvm/autotvm/graph_tun

[GitHub] [tvm] merrymercy edited a comment on pull request #7175: [AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated search space

2021-01-08 Thread GitBox
merrymercy edited a comment on pull request #7175: URL: https://github.com/apache/tvm/pull/7175#issuecomment-757051844 This PR broke the layout_transform template used in graph tuner. https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de87bc9566fe33a/python/tvm/autotvm/graph_tun

[tvm] branch revert-7175-main created (now afa896a)

2021-01-08 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository. lmzheng pushed a change to branch revert-7175-main in repository https://gitbox.apache.org/repos/asf/tvm.git. at afa896a Revert "[AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated sea

[tvm] 01/01: Revert "[AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated search space (#7175)"

2021-01-08 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository. lmzheng pushed a commit to branch revert-7175-main in repository https://gitbox.apache.org/repos/asf/tvm.git commit afa896a72f413b1556daa764c13a75b5601bca61 Author: Lianmin Zheng AuthorDate: Fri Jan 8 15:50:44 2021 -0800 Re

[GitHub] [tvm] merrymercy opened a new pull request #7236: Revert "[AutoTVM-FIX] avoid unexpected value(1) of search space when get length for uninitiated search space"

2021-01-08 Thread GitBox
merrymercy opened a new pull request #7236: URL: https://github.com/apache/tvm/pull/7236 Reverts apache/tvm#7175 because it breaks the graph tuner. see comments https://github.com/apache/tvm/pull/7175#issuecomment-757051844 ---

[GitHub] [tvm] tqchen commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757053449 Given that formatting can be done simply via `./tests/lint/git-clang-format.sh upstream/main` (where upstream points to apache/tvm), I feel it is not necessary to introduce the

[GitHub] [tvm] tqchen edited a comment on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen edited a comment on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757053449 Given that formatting can be done simply via `./tests/lint/git-clang-format.sh upstream/main` (where upstream points to apache/tvm), I feel it is not necessary to introd

[GitHub] [tvm] tkonolige commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tkonolige commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757055740 `./tests/lint/git-clang-format.sh upstream/main` is incorrect if you are behind main. Also, some people have origin pointing to apache/tvm and some have it pointing to their fork. Eve

[GitHub] [tvm] tqchen commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059177 The flow that sets up an upstream remote has been working well so far: ``` git remote add upstream https://github.com/apache/tvm ./tests/lint/git-clang-format.sh -i upstream

[GitHub] [tvm] merrymercy commented on pull request #7070: Add autoscheduler support to tvmc

2021-01-08 Thread GitBox
merrymercy commented on pull request #7070: URL: https://github.com/apache/tvm/pull/7070#issuecomment-757059470 For x86 CPUs, it is recommended to use `enable_cpu_cache_flush=True` in `LocalRunner`, as shown in the tutorial. https://github.com/apache/tvm/blob/54c995dbf7c96c1184c2baf64de8
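A short sketch of the recommended measurement setup (parameter names follow the `auto_scheduler` API; the trial count and file name below are illustrative, and the tvmc integration itself is what the PR under discussion adds):

```python
# Flush the CPU cache between measurement runs on x86 for more stable timings.
from tvm import auto_scheduler

runner = auto_scheduler.LocalRunner(
    repeat=10,
    enable_cpu_cache_flush=True,  # flush CPU cache between runs
)
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=200,
    runner=runner,
    measure_callbacks=[auto_scheduler.RecordToFile("autoscheduler_records.json")],
)
```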

[GitHub] [tvm] tqchen edited a comment on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen edited a comment on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059177 The flow that sets up an upstream remote has been working well so far: Set up the upstream ``` git remote add upstream https://github.com/apache/tvm ``` format

[GitHub] [tvm] tkonolige commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tkonolige commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757059633 Seems like half the places use upstream and half use origin. This is an automated message from the Apache Git Serv

[GitHub] [tvm] tqchen commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757060481 This is because of the difference between the CI setting and local setups. In CI, origin points to `apache/tvm`, so the CI script correctly points to `origin/main`. When we for

[GitHub] [tvm] tqchen edited a comment on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen edited a comment on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757060481 This is because of the difference between the CI setting and local setups. In CI, origin points to `apache/tvm` and there is no upstream being set up, so the CI script correctly point

[GitHub] [tvm] tkonolige commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tkonolige commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757062047 We should definitely use merge-base. I'm fine hardcoding upstream/main if it is the recommended way in the docs.

[GitHub] [tvm] masahi commented on pull request #7231: Adding aten::unsqueeze_ and aten::copy_ ops to PT Frontend

2021-01-08 Thread GitBox
masahi commented on pull request #7231: URL: https://github.com/apache/tvm/pull/7231#issuecomment-757065521 Interesting, if that can convert `unsqueeze_` to `unsqueeze` when Torch can prove it safe, that works great for us. @codeislife99 Do you want to try that? --

[GitHub] [tvm] tmoreau89 merged pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tmoreau89 merged pull request #7083: URL: https://github.com/apache/tvm/pull/7083 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to th

[GitHub] [tvm] tmoreau89 closed issue #6813: [RELAY] Support Random Number Generator

2021-01-08 Thread GitBox
tmoreau89 closed issue #6813: URL: https://github.com/apache/tvm/issues/6813 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the spe

[tvm] branch main updated: [RELAY, TOPI] Threefry PRNG: splittable and stateless (#7083)

2021-01-08 Thread moreau
This is an automated email from the ASF dual-hosted git repository. moreau pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new 701bcc2 [RELAY,TOPI] Threefry PRNG: splittable and sta

[GitHub] [tvm] tmoreau89 commented on pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-08 Thread GitBox
tmoreau89 commented on pull request #7083: URL: https://github.com/apache/tvm/pull/7083#issuecomment-757077540 Thank you @altanh @electriclilies @tqchen @jwfromm @MarisaKirisame for the reviews, the PR has been merged. This
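With the PR merged, the new Relay random kernels can be exercised along these lines (a usage sketch based on the module the PR adds under `relay.op.random`; treat the exact names and return conventions as assumptions to verify against the docs):

```python
# Sketch: derive a Threefry key from a seed and generate a block of random bits.
from tvm import relay

key = relay.random.threefry_key(0)                 # integer seed -> PRNG key
out = relay.random.threefry_generate(key, (2, 8))  # tuple of (next key, random block)
```

Because the generator is splittable and stateless, independent streams come from splitting the key rather than from mutating shared state.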

[GitHub] [tvm] tqchen commented on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen commented on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757078648 Can you elaborate a bit? I think that really depends on how git-clang-format is implemented. I ran a quick test on a branch that falls behind main and applied another commit

[GitHub] [tvm] tqchen edited a comment on pull request #7235: [FIX] Fix make format to work with arbitrary upstream names

2021-01-08 Thread GitBox
tqchen edited a comment on pull request #7235: URL: https://github.com/apache/tvm/pull/7235#issuecomment-757078648 Can you elaborate a bit? I think that really depends on how git-clang-format is implemented. If it runs on the diff of the current commit and the base, then it seems we shoul

[GitHub] [tvm] tqchen commented on a change in pull request #7084: [TIR] Support Return in TIR

2021-01-08 Thread GitBox
tqchen commented on a change in pull request #7084: URL: https://github.com/apache/tvm/pull/7084#discussion_r554276298 ## File path: src/tir/transforms/make_packed_api.cc ## @@ -41,6 +41,56 @@ namespace tvm { namespace tir { +class ReturnRewriter : public StmtMutator { + pu

[GitHub] [tvm] tqchen commented on a change in pull request #7084: [TIR] Support Return in TIR

2021-01-08 Thread GitBox
tqchen commented on a change in pull request #7084: URL: https://github.com/apache/tvm/pull/7084#discussion_r554276298 ## File path: src/tir/transforms/make_packed_api.cc ## @@ -41,6 +41,56 @@ namespace tvm { namespace tir { +class ReturnRewriter : public StmtMutator { + pu

[GitHub] [tvm] lixiaoquan opened a new pull request #7237: [ONNX] Fix issues for Clip and RoiAlign

2021-01-08 Thread GitBox
lixiaoquan opened a new pull request #7237: URL: https://github.com/apache/tvm/pull/7237 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above

[GitHub] [tvm] comaniac merged pull request #7232: [TOPI] Treat undefined elements as constants in Array

2021-01-08 Thread GitBox
comaniac merged pull request #7232: URL: https://github.com/apache/tvm/pull/7232 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the

[tvm] branch main updated: [TOPI] Treat undefined elements as constants in Array (#7232)

2021-01-08 Thread comaniac
This is an automated email from the ASF dual-hosted git repository. comaniac pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new 02ef6e6 [TOPI] Treat undefined elements as constants

[GitHub] [tvm] comaniac commented on pull request #7232: [TOPI] Treat undefined elements as constants in Array

2021-01-08 Thread GitBox
comaniac commented on pull request #7232: URL: https://github.com/apache/tvm/pull/7232#issuecomment-757089909 Thanks @mbrookhart @yzhliu This is an automated message from the Apache Git Service. To respond to the message, pl
