[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5797: [MergeComposite] Fix InferType when module contains Prelude
lixiaoquan commented on a change in pull request #5797:
URL: https://github.com/apache/incubator-tvm/pull/5797#discussion_r440584386

## File path: src/relay/transforms/merge_composite.cc ##

@@ -36,22 +36,24 @@
 namespace tvm {
 namespace relay {
 namespace merge_composite {

-Function InferType(const Function& expr) {
-  auto mod = IRModule::FromExpr(expr);
+Function InferType(const Function& expr, const IRModule& m) {
+  IRModule mod(m);
+  mod->Update(mod->GetGlobalVar("main"), expr);

Review comment:
> Since now we update the original module with the new transformed function, should we update the corresponding function instead of `main`?

For now, other functions in the module don't call 'main', so it is safe to replace 'main'. If we are inferring a function mutated from 'main', that is exactly what we want; if we are inferring another function, the replacement leaves a duplicated function in the module, but it does no harm. Also, given only a mutated function, we cannot find it in the original global_var_map_ because we don't know the name of its corresponding global variable.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
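The reasoning above — copy the module and overwrite 'main' so every other definition (e.g. the Prelude functions) stays in scope during type inference — can be illustrated with a toy dict-based sketch (plain Python, not the TVM API; all names are illustrative):

```python
prelude_defs = {"cons": "...", "map": "...", "filter": "..."}
original_mod = {"main": "old_main", **prelude_defs}

def infer_with_fresh_module(func):
    # Old behaviour (IRModule::FromExpr): build a module holding only the
    # function, so references to Prelude definitions cannot be resolved.
    mod = {"main": func}
    return set(mod)

def infer_with_updated_copy(func, module):
    # New behaviour: copy the caller's module and overwrite 'main'; Prelude
    # definitions remain visible while the function is type-checked.
    mod = dict(module)
    mod["main"] = func
    return set(mod)
```

The copy also keeps the caller's module unmutated, which matches the pattern of constructing a new IRModule from `m` in the diff.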
[GitHub] [incubator-tvm] masahi edited a comment on pull request #5797: [MergeComposite] Fix InferType when module contains Prelude
masahi edited a comment on pull request #5797:
URL: https://github.com/apache/incubator-tvm/pull/5797#issuecomment-644515613

PyTorch frontend also uses Prelude. It contains a List ADT (as in functional programming) and functions on lists (cons, map, filter, concat, length, nth, etc.), which allow creating and manipulating dynamic lists. For the PyTorch frontend, this is necessary for supporting Python list append and stacking a list of tensors, for example.

Prelude is mostly rewritten in the Relay "language" (files with the .rly extension). Relay has a parser for that language which translates the text representation into the C++ representation; it is also usable from Python.
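For intuition, the List ADT described here is the classic functional cons-list. A plain-Python analogue (illustrative only, not the Relay/Prelude API) of the nil/cons constructors and two of the listed helpers:

```python
# Plain-Python analogue of Prelude's List ADT: a list is either nil
# or a (head, tail) pair built with cons.
nil = None

def cons(head, tail):
    return (head, tail)

def length(lst):
    # recursive length, like Prelude's `length`
    return 0 if lst is nil else 1 + length(lst[1])

def nth(lst, i):
    # i-th element, like Prelude's `nth`
    head, tail = lst
    return head if i == 0 else nth(tail, i - 1)

def to_py(lst):
    # flatten the cons-list into a Python list for inspection
    out = []
    while lst is not nil:
        out.append(lst[0])
        lst = lst[1]
    return out

xs = cons(1, cons(2, cons(3, nil)))
```

This is the shape of list the PyTorch and TensorFlow frontends build when they translate Python list operations into Relay.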
[incubator-tvm-site] branch asf-site updated: Build at Mon Jun 15 19:24:40 PDT 2020
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 5cfdfb0  Build at Mon Jun 15 19:24:40 PDT 2020

5cfdfb0 is described below

commit 5cfdfb04ded62b9a0c5f5ba3281ee5ced3254309
Author: tqchen
AuthorDate: Mon Jun 15 19:24:40 2020 -0700

    Build at Mon Jun 15 19:24:40 PDT 2020
---
 atom.xml       | 2 +-
 community.html | 2 +-
 rss.xml        | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

(The diff updates the Atom and RSS feed timestamps from 2020-06-15T19:19:22-07:00 to 2020-06-15T19:24:37-07:00; the community.html hunk's content was lost in extraction.)
[incubator-tvm-site] branch master updated: change size
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/master by this push:
     new 6d83cdb  change size

6d83cdb is described below

commit 6d83cdb725186096e9da2221fe775b7c62b22656
Author: tqchen
AuthorDate: Mon Jun 15 19:24:24 2020 -0700

    change size
---
 community.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(The community.md hunk's content was lost in extraction.)
[incubator-tvm-site] branch master updated: Add qualcomm logo
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/master by this push:
     new bdd6115  Add qualcomm logo

bdd6115 is described below

commit bdd6115c5eff4023c5d47caa3c208f30e696c7ed
Author: tqchen
AuthorDate: Mon Jun 15 19:18:30 2020 -0700

    Add qualcomm logo
---
 community.md                    |   1 +
 images/community/qualcommic.png | Bin 0 -> 32799 bytes
 2 files changed, 1 insertion(+)

diff --git a/images/community/qualcommic.png b/images/community/qualcommic.png
new file mode 100644
index 000..467e122
Binary files /dev/null and b/images/community/qualcommic.png differ

(The community.md hunk's markup content was lost in extraction.)
[incubator-tvm-site] branch asf-site updated: Build at Mon Jun 15 19:19:25 PDT 2020
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 2b624f7  Build at Mon Jun 15 19:19:25 PDT 2020

2b624f7 is described below

commit 2b624f7439fdd285394c39efef7c41a0defbf972
Author: tqchen
AuthorDate: Mon Jun 15 19:19:25 2020 -0700

    Build at Mon Jun 15 19:19:25 PDT 2020
---
 2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html | 16 ++++----
 atom.xml                        | 18 +-
 community.html                  |  1 +
 images/community/qualcommic.png | Bin 0 -> 32799 bytes
 rss.xml                         | 20 ++--
 5 files changed, 28 insertions(+), 27 deletions(-)

(The diff bodies — re-highlighted LLVM IR snippets in the ROCm blog post and regenerated feed entries — were garbled in extraction and are omitted.)
[GitHub] [incubator-tvm] lixiaoquan commented on pull request #5797: [MergeComposite] Fix InferType when module contains Prelude
lixiaoquan commented on pull request #5797:
URL: https://github.com/apache/incubator-tvm/pull/5797#issuecomment-644477410

> LGTM
>
> For my edification, is there a tutorial on what Prelude actually does somewhere? It's not something I've been able to fit into my mental model of TVM yet.

The TensorFlow frontend generates a module with Prelude: https://github.com/apache/incubator-tvm/blob/7a419718c121164fc260864014e1d0d81f556949/python/tvm/relay/frontend/tensorflow.py#L2729
[GitHub] [incubator-tvm] xqdan commented on a change in pull request #5772: [ARITH]add simplify rule for div
xqdan commented on a change in pull request #5772:
URL: https://github.com/apache/incubator-tvm/pull/5772#discussion_r440533420

## File path: src/arith/rewrite_simplify.cc ##

@@ -353,6 +353,19 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const SubNode* op) {
                        truncdiv(x + c1, c3) - truncdiv(x, c3), truncdiv(truncmod(x, c3) + c1, c3),
                        CanProveGreaterEqual(x.Eval(), 0) && c1.Eval()->value >= 0 && c3.Eval()->value > 0);
+    TVM_TRY_REWRITE_IF(truncdiv(x * c1 + c2, c3) - truncdiv(x * c1 + c4, c3), truncdiv(c2 - c4, c3),
+                       c1.Eval()->value >= 0 && c2.Eval()->value > 0 && c3.Eval()->value > 0 &&
+                       c4.Eval()->value >= 0 && CanProveGreaterEqual(x.Eval(), 0));

Review comment:
Yes, we need to add the condition c2 - c4 >= c3.
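Rewrite rules like this can be sanity-checked by brute force, since Python's `//` agrees with truncdiv on nonnegative operands. The sketch below searches small values for a counterexample to the proposed rule, including the extra side condition c2 - c4 >= c3 discussed above:

```python
def find_counterexample(limit=6):
    """Search for nonnegative operands where
    truncdiv(x*c1 + c2, c3) - truncdiv(x*c1 + c4, c3) != truncdiv(c2 - c4, c3)
    under c1 >= 0, c2 > 0, c3 > 0, c4 >= 0, x >= 0 and c2 - c4 >= c3."""
    for x in range(limit):
        for c1 in range(limit):
            for c2 in range(1, limit):
                for c3 in range(1, limit):
                    for c4 in range(limit):
                        if c2 - c4 < c3:
                            continue  # extra side condition from the review
                        lhs = (x * c1 + c2) // c3 - (x * c1 + c4) // c3
                        rhs = (c2 - c4) // c3
                        if lhs != rhs:
                            return (x, c1, c2, c3, c4)
    return None
```

Within this small range the search does find mismatches (e.g. with x*c1 = 0, c2 = 4, c3 = 2, c4 = 1: the left side is 2 but the right side is 1), so the rule likely needs a stronger side condition than c2 - c4 >= c3 alone.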
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5821: [AutoTVM] Suppress the warning messages when compile engine selects impls
icemelon9 commented on pull request #5821: URL: https://github.com/apache/incubator-tvm/pull/5821#issuecomment-644468931 cc @merrymercy @anijain2305
[GitHub] [incubator-tvm] icemelon9 opened a new pull request #5821: [AutoTVM] Suppress the warning messages when compile engine selects impls
icemelon9 opened a new pull request #5821:
URL: https://github.com/apache/incubator-tvm/pull/5821

Currently the compile engine compares all valid implementations and produces many warning messages that can confuse users. This PR suppresses the excessive warnings and only displays the fallback warning when the selected implementation uses a fallback config.
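The behaviour described — compare candidates silently, then warn only if the winner runs with a fallback config — can be sketched like this (hypothetical data layout, not the AutoTVM API):

```python
import logging

def select_impl(impls):
    """impls: list of (name, priority, uses_fallback_config) tuples."""
    # compare all valid implementations without emitting anything
    best = max(impls, key=lambda impl: impl[1])
    name, _, uses_fallback = best
    # warn only about the implementation actually chosen, and only
    # when it would run with an untuned fallback config
    if uses_fallback:
        logging.warning("Cannot find tuned config for %s, using fallback.", name)
    return name
```

Losing candidates never produce warnings, which is the suppression this PR describes.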
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5820: [Relay][OpStrategy] Tweak cublas priority level
icemelon9 commented on pull request #5820: URL: https://github.com/apache/incubator-tvm/pull/5820#issuecomment-68472 @comaniac Good catch. Updated.
[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5797: [MergeComposite] Fix InferType when module contains Prelude
mbrookhart commented on a change in pull request #5797:
URL: https://github.com/apache/incubator-tvm/pull/5797#discussion_r440502738

## File path: src/relay/transforms/merge_composite.cc ##

@@ -36,22 +36,24 @@
 namespace tvm {
 namespace relay {
 namespace merge_composite {

-Function InferType(const Function& expr) {
-  auto mod = IRModule::FromExpr(expr);
+Function InferType(const Function& expr, const IRModule& m) {
+  IRModule mod(m);
+  mod->Update(mod->GetGlobalVar("main"), expr);

Review comment:
Mmm, you're right; I've been reading the pass infrastructure a little more today. The FunctionPass, however, doesn't seem to pass down to the passes the information about which Function we're visiting: https://github.com/apache/incubator-tvm/blob/8578096853eec5711bfcc9a3a68145fd12a135cb/src/relay/ir/transform.cc#L123-L132

I guess we can either change that API (which touches a lot of passes), or invert this Map (https://github.com/apache/incubator-tvm/blob/4347b41a5e64a2a453297b371232d6e101051b3c/include/tvm/ir/module.h#L53), find the global var, and store that in the class.
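The map-inversion idea at the end can be sketched in plain Python (dicts standing in for tvm's Map; the helper name is hypothetical):

```python
def global_var_of(functions, func):
    """Given {global_var_name: function}, recover the name bound to `func`.

    Inverts the map by object identity, mirroring the suggestion to invert
    IRModule's functions map to find a function's GlobalVar."""
    inverse = {id(f): gv for gv, f in functions.items()}
    return inverse.get(id(func))

f_main, f_helper = object(), object()
functions = {"main": f_main, "helper": f_helper}
```

The inversion must key on identity rather than equality, since a mutated function would no longer compare equal to the module's stored copy anyway.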
[GitHub] [incubator-tvm] zhiics edited a comment on issue #5792: [RUNTIME][IR] String operator+
zhiics edited a comment on issue #5792: URL: https://github.com/apache/incubator-tvm/issues/5792#issuecomment-644214755 #5806
[GitHub] [incubator-tvm] icemelon9 opened a new pull request #5820: [Relay][OpStrategy] Tweak cublas priority level
icemelon9 opened a new pull request #5820: URL: https://github.com/apache/incubator-tvm/pull/5820 Increase the dense cublas priority level such that it'll be used by default when cublas is included in the target.
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5819: [Frontend][MXNet] Support a few contrib ops in mxnet
icemelon9 commented on pull request #5819: URL: https://github.com/apache/incubator-tvm/pull/5819#issuecomment-644426890 @vinx13 @siju-samuel @kevinthesun @sxjscience Could you help take a look?
[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
zhiics commented on a change in pull request #5806:
URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440483807

## File path: include/tvm/runtime/container.h ##

@@ -1410,10 +1344,114 @@ inline String& String::operator=(std::string other) {

 inline String& String::operator=(const char* other) { return operator=(std::string(other)); }

-inline String operator+(const std::string lhs, const String& rhs) {
-  return lhs + rhs.operator std::string();
+template ::value ||
+                      std::is_same::value>::type,
+          typename = typename std::enable_if::value ||
+                      (std::is_same::value &&
+                       !std::is_same::value)>::type>
+inline String operator+(const T& lhs, const U& rhs) {
+  size_t lhs_size = lhs.size();
+  size_t rhs_size = rhs.size();
+  char* concat = new char[lhs_size + rhs_size + 1];
+  std::memcpy(concat, lhs.data(), lhs_size);
+  std::memcpy(concat + lhs_size, rhs.data(), rhs_size);
+  auto ptr = make_object();

Review comment:
Let us use FromStd for now. The customized deleter would be an incremental change to the constructor of `String(const char*)`, which needs some extra effort.
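The allocate-then-copy concatenation in the diff — size both operands, allocate one buffer, copy each operand into place — can be mirrored in Python with a preallocated bytearray (illustrative only, not the String implementation):

```python
def concat(lhs: bytes, rhs: bytes) -> bytes:
    # allocate one buffer large enough for both operands,
    # like `new char[lhs_size + rhs_size + 1]`
    buf = bytearray(len(lhs) + len(rhs))
    # copy each operand into place, like the two memcpy calls
    buf[:len(lhs)] = lhs
    buf[len(lhs):] = rhs
    return bytes(buf)
```

A single allocation avoids the intermediate temporaries a chain of `+` on plain strings would create, which is the point of the overload.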
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5780: [Fix] Fix recursive let for well formed check
icemelon9 commented on pull request #5780: URL: https://github.com/apache/incubator-tvm/pull/5780#issuecomment-644355394 Thanks @lixiaoquan. I've applied your patch. For other passes, I agree with @zhiics that we should rewrite them to non-recursive form.
[incubator-tvm] branch master updated (520ca0a -> 99745a4)
This is an automated email from the ASF dual-hosted git repository. haichen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 520ca0a [CI] Limit number of threads in all jobs (#5815) add 99745a4 [COMMUNITY] Siju Samuel -> Committer (#5817) No new revisions were added by this update. Summary of changes: CONTRIBUTORS.md | 1 + 1 file changed, 1 insertion(+)
[GitHub] [incubator-tvm] icemelon9 merged pull request #5817: [COMMUNITY] Siju Samuel -> Committer
icemelon9 merged pull request #5817: URL: https://github.com/apache/incubator-tvm/pull/5817
[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
anijain2305 commented on a change in pull request #5754:
URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r440384299

## File path: python/tvm/relay/op/nn/nn.py ##

@@ -1976,6 +1976,74 @@ def contrib_conv2d_winograd_without_weight_transform(data,
                                                      kernel_layout, out_layout, out_dtype)

+def contrib_conv2d_gemm_without_weight_transform(data,
+                                                 weight,
+                                                 strides=(1, 1),
+                                                 padding=(0, 0),
+                                                 dilation=(1, 1),
+                                                 groups=1,
+                                                 channels=None,
+                                                 kernel_size=None,
+                                                 data_layout="NCHW",
+                                                 kernel_layout="OIHW",
+                                                 out_layout="",
+                                                 out_dtype=""):
+    r"""2D convolution with gemm algorithm.

Review comment:
Thanks for the explanation. No need to remove.
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
comaniac commented on a change in pull request #5811:
URL: https://github.com/apache/incubator-tvm/pull/5811#discussion_r440383211

## File path: python/tvm/autotvm/task/space.py ##

@@ -119,21 +125,26 @@ def __init__(self, var, name=None):
         super(VirtualAxis, self).__init__()
         self.num_output = 1

-        if name is None:
-            name = 'axis_%d' % VirtualAxis.name_ct
-            VirtualAxis.name_ct += 1
+        self.name = None
+        if name is not None:
+            self.name = name

-        self.name = name
         if isinstance(var, (int, _long)):
             self.length = var
+            if name is None:
+                self.name = 'axis_%d' % VirtualAxis.name_ct
+                VirtualAxis.name_ct += 1
         elif isinstance(var, schedule.IterVar):
-            self.name = var.var.name
+            if self.name is None:
+                self.name = var.var.name
             if var.dom is None:
                 self.length = -1
             else:
                 self.length = get_const_int(var.dom.extent)
         elif isinstance(var, VirtualAxis):
             self.length = var.length
+            if self.name is None:
+                self.name = var.name

Review comment:
In my opinion, the `name_ct` doesn't really matter, because it just makes sure every name is unique, but I am not quite sure about the correct behavior for determining the name either. @merrymercy could you help clarify?
[GitHub] [incubator-tvm] comaniac commented on pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
comaniac commented on pull request #5811:
URL: https://github.com/apache/incubator-tvm/pull/5811#issuecomment-644317657

> @comaniac A quick replication of the bug is to replace
> https://github.com/apache/incubator-tvm/blob/520ca0a8b39aeb4765369f169477265230ea7c6c/tutorials/autotvm/tune_simple_template.py#L209
> with
> ```
> cfg.define_reorder('reorder', [yo, xo, k, yi, xi], policy='candidate', candidate=[[yo, xo, k, yi, xi], [xo, yo, k, yi, xi]])
> cfg['reorder'].apply(s, C, [yo, xo, k, yi, xi])
> ```

I see what you meant. It seems that all use cases in the current TOPI reorder only the split axes, which are already virtual axes. Accordingly, I'd suggest checking `axes` and creating `VirtualAxis` for them at the beginning of `ReorderSpace.__init__`.
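The suggestion — wrap any raw axis into a VirtualAxis at the top of `ReorderSpace.__init__` — can be sketched with stand-in classes (hypothetical, not the AutoTVM implementation):

```python
class VirtualAxis:
    def __init__(self, name, length):
        self.name = name
        self.length = length

class IterVar:  # stand-in for tvm.te.schedule.IterVar
    def __init__(self, name, extent):
        self.name = name
        self.extent = extent

def normalize_axes(axes):
    """Wrap anything that is not already a VirtualAxis."""
    out = []
    for ax in axes:
        if isinstance(ax, VirtualAxis):
            out.append(ax)  # already virtual: keep as-is
        else:
            # wrap the raw iteration variable, reusing its name and extent
            out.append(VirtualAxis(ax.name, ax.extent))
    return out

axes = [VirtualAxis("yo", 8), IterVar("k", 32)]
norm = normalize_axes(axes)
```

After normalization, membership tests against candidate lists see only VirtualAxis objects, which is what the 'candidate' policy's lookup expects.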
[GitHub] [incubator-tvm] icemelon9 opened a new pull request #5819: [Frontend][MXNet] Support a few contrib ops in mxnet
icemelon9 opened a new pull request #5819: URL: https://github.com/apache/incubator-tvm/pull/5819 Thanks for the help from @eric-haibin-lin.
[GitHub] [incubator-tvm] moderato commented on pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
moderato commented on pull request #5811:
URL: https://github.com/apache/incubator-tvm/pull/5811#issuecomment-644306923

@comaniac A quick replication of the bug is to replace
https://github.com/apache/incubator-tvm/blob/520ca0a8b39aeb4765369f169477265230ea7c6c/tutorials/autotvm/tune_simple_template.py#L209
with
```
cfg.define_reorder('reorder', [yo, xo, k, yi, xi], policy='candidate', candidate=[[yo, xo, k, yi, xi], [xo, yo, k, yi, xi]])
cfg['reorder'].apply(s, C, [yo, xo, k, yi, xi])
```
[GitHub] [incubator-tvm] moderato commented on pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
moderato commented on pull request #5811:
URL: https://github.com/apache/incubator-tvm/pull/5811#issuecomment-644299685

@comaniac Thank you for the review! I looked into the example you mentioned, and I think it works because the candidates passed to the kwargs of the `_add_new_transform` function are lists of `VirtualAxis`, while in my application the candidates are lists of `IterVar`. Is `define_reorder` supposed to accept `IterVar`s as arguments?
[GitHub] [incubator-tvm] moderato commented on a change in pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
moderato commented on a change in pull request #5811:
URL: https://github.com/apache/incubator-tvm/pull/5811#discussion_r440362082

## File path: python/tvm/autotvm/task/space.py ##

@@ -119,21 +125,26 @@ def __init__(self, var, name=None):
         super(VirtualAxis, self).__init__()
         self.num_output = 1

-        if name is None:
-            name = 'axis_%d' % VirtualAxis.name_ct
-            VirtualAxis.name_ct += 1
+        self.name = None
+        if name is not None:
+            self.name = name

-        self.name = name
         if isinstance(var, (int, _long)):
             self.length = var
+            if name is None:
+                self.name = 'axis_%d' % VirtualAxis.name_ct
+                VirtualAxis.name_ct += 1
         elif isinstance(var, schedule.IterVar):
-            self.name = var.var.name
+            if self.name is None:
+                self.name = var.var.name
             if var.dom is None:
                 self.length = -1
             else:
                 self.length = get_const_int(var.dom.extent)
         elif isinstance(var, VirtualAxis):
             self.length = var.length
+            if self.name is None:
+                self.name = var.name

Review comment:
Actually, I'm a bit confused by the original semantics. It is meant to overwrite the name with var's name if var is an IterVar, but not to overwrite it if var is a VirtualAxis that already has a name. Also, when name=None and var is an IterVar, name_ct is incremented by 1 but is not used in the naming. Is this the intended behavior?
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5740: [Object] Introduce POD-C Compliant tvm::Map
junrushao1994 commented on a change in pull request #5740: URL: https://github.com/apache/incubator-tvm/pull/5740#discussion_r440341151 ## File path: include/tvm/node/container.h ## @@ -43,51 +44,1188 @@ using runtime::Downcast; using runtime::IterAdapter; using runtime::make_object; using runtime::Object; +using runtime::ObjectEqual; +using runtime::ObjectHash; using runtime::ObjectPtr; using runtime::ObjectPtrEqual; using runtime::ObjectPtrHash; using runtime::ObjectRef; using runtime::String; using runtime::StringObj; -/*! \brief String-aware ObjectRef hash functor */ -struct ObjectHash { - size_t operator()(const ObjectRef& a) const { -if (const auto* str = a.as()) { - return String::HashBytes(str->data, str->size); -} -return ObjectPtrHash()(a); +#if (TVM_USE_STL_MAP != 0) + +/*! \brief Shared content of all specializations of hash map */ +class MapNode : public Object { + public: + /*! \brief Type of the keys in the hash map */ + using key_type = ObjectRef; + /*! \brief Type of the values in the hash map */ + using mapped_type = ObjectRef; + /*! \brief Type of the actual underlying container */ + using ContainerType = std::unordered_map; + /*! \brief Iterator class */ + using iterator = ContainerType::iterator; + /*! \brief Iterator class */ + using const_iterator = ContainerType::const_iterator; + /*! \brief Type of value stored in the hash map */ + using KVType = ContainerType::value_type; + + static_assert(std::is_standard_layout::value, "KVType is not standard layout"); + static_assert(sizeof(KVType) == 16 || sizeof(KVType) == 8, "sizeof(KVType) incorrect"); + + static constexpr const uint32_t _type_index = runtime::TypeIndex::kRuntimeMap; + static constexpr const char* _type_key = "Map"; + TVM_DECLARE_FINAL_OBJECT_INFO(MapNode, Object); + + /*! + * \brief Number of elements in the SmallMapNode + * \return The result + */ + size_t size() const { return data_.size(); } + /*! 
+ * \brief Count the number of times a key exists in the hash map + * \param key The indexing key + * \return The result, 0 or 1 + */ + size_t count(const key_type& key) const { return data_.count(key); } + /*! + * \brief Index value associated with a key, throw exception if the key does not exist + * \param key The indexing key + * \return The const reference to the value + */ + const mapped_type& at(const key_type& key) const { return data_.at(key); } + /*! + * \brief Index value associated with a key, throw exception if the key does not exist + * \param key The indexing key + * \return The mutable reference to the value + */ + mapped_type& at(const key_type& key) { return data_.at(key); } + /*! \return begin iterator */ + iterator begin() { return data_.begin(); } + /*! \return const begin iterator */ + const_iterator begin() const { return data_.begin(); } + /*! \return end iterator */ + iterator end() { return data_.end(); } + /*! \return end iterator */ + const_iterator end() const { return data_.end(); } + /*! + * \brief Index value associated with a key + * \param key The indexing key + * \return The iterator of the entry associated with the key, end iterator if not exists + */ + const_iterator find(const key_type& key) const { return data_.find(key); } + /*! + * \brief Index value associated with a key + * \param key The indexing key + * \return The iterator of the entry associated with the key, end iterator if not exists + */ + iterator find(const key_type& key) { return data_.find(key); } + /*! + * \brief Erase the entry associated with the iterator + * \param position The iterator + */ + void erase(const iterator& position) { data_.erase(position); } + /*! + * \brief Erase the entry associated with the key, do nothing if not exists + * \param key The indexing key + */ + void erase(const key_type& key) { data_.erase(key); } + + protected: + /*! 
+ * \brief Create an empty container + * \return The object created + */ + static ObjectPtr Empty() { return make_object(); } + /*! + * \brief Create the map using contents from the given iterators. + * \param first Begin of iterator + * \param last End of iterator + * \tparam IterType The type of iterator + * \return ObjectPtr to the map created + */ + template + static ObjectPtr CreateFromRange(IterType first, IterType last) { +ObjectPtr p = make_object(); +p->data_ = std::unordered_map(first, last); +return p; + } + /*! + * \brief InsertMaybeReHash an entry into the given hash map + * \param kv The entry to be inserted + * \param map The pointer to the map, can be changed if re-hashing happens + */ + static void InsertMaybeReHash(const KVType& kv, ObjectPtr* map) { +MapNode* m = static_cast(map->get()); +m->data_[kv.first] = kv.second; } + /*! + * \brief Create an empty container with elements copying from another MapNode + * \param m The source container
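The quoted hunk above has lost its C++ template arguments in mail transit, but the design it describes — a map node wrapping a hash map keyed by object references, where strings hash by content and every other object by pointer identity — can be sketched in Python. All names below are hypothetical stand-ins, not the actual TVM API:

```python
class ObjectRef:
    """Stand-in for TVM's ObjectRef: hashed and compared by identity."""
    def __hash__(self):
        return id(self)
    def __eq__(self, other):
        return self is other

class StringObj(ObjectRef):
    """Stand-in for runtime::String: hashed and compared by content."""
    def __init__(self, data):
        self.data = data
    def __hash__(self):
        return hash(self.data)
    def __eq__(self, other):
        return isinstance(other, StringObj) and self.data == other.data

class MapNode:
    """STL-style map node: a thin wrapper over an ObjectRef -> ObjectRef hash map."""
    def __init__(self):
        self._data = {}
    def count(self, key):
        # 0 or 1, mirroring std::unordered_map::count
        return 1 if key in self._data else 0
    def at(self, key):
        # raises KeyError if absent, mirroring ::at throwing
        return self._data[key]
    def insert_maybe_rehash(self, key, value):
        # the dict rehashes internally as needed
        self._data[key] = value
    def erase(self, key):
        # no-op if the key is absent, mirroring ::erase(key)
        self._data.pop(key, None)
```

With this split, two `StringObj("x")` instances index the same entry, while two plain `ObjectRef` instances never collide — the behavior the string-aware hash functor in the diff provides.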
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5740: [Object] Introduce POD-C Compliant tvm::Map
junrushao1994 commented on a change in pull request #5740: URL: https://github.com/apache/incubator-tvm/pull/5740#discussion_r440340863 ## File path: include/tvm/node/container.h ## @@ -43,51 +44,1188 @@ (same `MapNode` hunk as quoted above)
[GitHub] [incubator-tvm] ANSHUMAN87 opened a new pull request #5818: Error msg update
ANSHUMAN87 opened a new pull request #5818: URL: https://github.com/apache/incubator-tvm/pull/5818 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen commented on issue #5559: [TIR] Bugs in HoistIfThenElse
tqchen commented on issue #5559: URL: https://github.com/apache/incubator-tvm/issues/5559#issuecomment-644270821 Given that this pass is not production-ready and we have not yet migrated it to the transform framework, perhaps we can remove the pass for now and add it back once we have a better implementation. Leaving the thread open for a week to see what everyone thinks.
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5797: [MergeComposite] Fix InferType when module contains Prelude
comaniac commented on a change in pull request #5797: URL: https://github.com/apache/incubator-tvm/pull/5797#discussion_r440335267 ## File path: src/relay/transforms/merge_composite.cc ## @@ -36,22 +36,24 @@ namespace tvm { namespace relay { namespace merge_composite { -Function InferType(const Function& expr) { - auto mod = IRModule::FromExpr(expr); +Function InferType(const Function& expr, const IRModule& m) { + IRModule mod(m); + mod->Update(mod->GetGlobalVar("main"), expr); Review comment: Since we now update the original module with the newly transformed function, should we update the corresponding function instead of `main`?
[GitHub] [incubator-tvm] tqchen commented on pull request #5815: [CI] Limit number of threads in all jobs
tqchen commented on pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#issuecomment-644268424 Merging for now, as the current LoC change is small; we can deliberate more about option consolidation in follow-ups. Thanks @u99127 @zhiics!
[GitHub] [incubator-tvm] tqchen merged pull request #5815: [CI] Limit number of threads in all jobs
tqchen merged pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815
[incubator-tvm] branch master updated (d97dcd7 -> 520ca0a)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

from d97dcd7 Pin hand landmark network to version 0.7.4. (#5813)
add 520ca0a [CI] Limit number of threads in all jobs (#5815)

No new revisions were added by this update. Summary of changes:

tests/scripts/task_cpp_unittest.sh       | 4 ++++
tests/scripts/task_golang.sh             | 4 ++++
tests/scripts/task_java_unittest.sh      | 4 ++++
tests/scripts/task_python_docs.sh        | 4 ++++
tests/scripts/task_python_integration.sh | 2 ++
tests/scripts/task_python_topi.sh        | 5 +++++
tests/scripts/task_python_vta_fsim.sh    | 4 ++++
tests/scripts/task_python_vta_tsim.sh    | 4 ++++
tests/scripts/task_rust.sh               | 4 ++++
9 files changed, 35 insertions(+)
[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
zhiics commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440329966 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ inline String& String::operator=(std::string other) { inline String& String::operator=(const char* other) { return operator=(std::string(other)); } -inline String operator+(const std::string lhs, const String& rhs) { - return lhs + rhs.operator std::string(); +template ::value || + std::is_same::value>::type, + typename = typename std::enable_if::value || + (std::is_same::value && + !std::is_same::value)>::type> +inline String operator+(const T& lhs, const U& rhs) { + size_t lhs_size = lhs.size(); + size_t rhs_size = rhs.size(); + char* concat = new char[lhs_size + rhs_size + 1]; + std::memcpy(concat, lhs.data(), lhs_size); + std::memcpy(concat + lhs_size, rhs.data(), rhs_size); + auto ptr = make_object(); Review comment: we can probably have a `StringObj::FromCharPtr` for `const char*` and make the constructor use `make_inplace_array`. Currently we just use `std::string` for the `const char*`.
[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5817: [COMMUNITY] Siju Samuel -> Committer
ANSHUMAN87 commented on pull request #5817: URL: https://github.com/apache/incubator-tvm/pull/5817#issuecomment-644264092 Congratulations Siju!!! :+1:
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
comaniac commented on a change in pull request #5811: URL: https://github.com/apache/incubator-tvm/pull/5811#discussion_r440313183 ## File path: python/tvm/autotvm/task/space.py ## @@ -119,21 +125,26 @@ def __init__(self, var, name=None): super(VirtualAxis, self).__init__() self.num_output = 1 -if name is None: -name = 'axis_%d' % VirtualAxis.name_ct -VirtualAxis.name_ct += 1 +self.name = None +if name is not None: +self.name = name -self.name = name if isinstance(var, (int, _long)): self.length = var +if name is None: +self.name = 'axis_%d' % VirtualAxis.name_ct +VirtualAxis.name_ct += 1 elif isinstance(var, schedule.IterVar): -self.name = var.var.name +if self.name is None: +self.name = var.var.name if var.dom is None: self.length = -1 else: self.length = get_const_int(var.dom.extent) elif isinstance(var, VirtualAxis): self.length = var.length +if self.name is None: +self.name = var.name Review comment: It seems this doesn't match the original semantics? ## File path: python/tvm/autotvm/task/space.py ## @@ -119,21 +125,26 @@ def __init__(self, var, name=None): super(VirtualAxis, self).__init__() self.num_output = 1 -if name is None: -name = 'axis_%d' % VirtualAxis.name_ct -VirtualAxis.name_ct += 1 +self.name = None +if name is not None: +self.name = name Review comment: You can just use `self.name = name` to cover these 3 lines.
## File path: python/tvm/autotvm/task/space.py ## @@ -119,21 +125,26 @@ (same hunk as above, at the `elif isinstance(var, VirtualAxis):` branch) Review comment: Modify the class docstring (`var: int or tvm.te.schedule.IterVar`) to add `VirtualAxis`.
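The name-defaulting semantics comaniac asks about are easier to see in isolation. A minimal standalone sketch of the logic under discussion (simplified: it drops the `IterVar` branch and is not the real `VirtualAxis` class):

```python
class VirtualAxis:
    """Sketch of the name-defaulting logic discussed above (not the real class)."""

    name_ct = 0  # class-wide counter for auto-generated names

    def __init__(self, var, name=None):
        self.name = name  # an explicitly passed name always wins
        if isinstance(var, int):
            self.length = var
            if self.name is None:
                # int axes get a generated name: axis_0, axis_1, ...
                self.name = 'axis_%d' % VirtualAxis.name_ct
                VirtualAxis.name_ct += 1
        elif isinstance(var, VirtualAxis):
            self.length = var.length
            if self.name is None:
                # fall back to the source axis's name
                self.name = var.name
        else:
            raise TypeError("var must be int or VirtualAxis in this sketch")
```

The question in the review is whether the `VirtualAxis` branch inheriting `var.name` (rather than always generating a fresh name) matches what the original code did.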
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5815: [CI] Limit number of threads in all jobs
tqchen commented on a change in pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#discussion_r440328308 ## File path: tests/scripts/task_python_vta_fsim.sh ## @@ -20,6 +20,10 @@ set -e set -u source tests/scripts/setup-pytest-env.sh +# to avoid CI thread throttling. +export TVM_BIND_THREADS=0 +export OMP_NUM_THREADS=1 + Review comment: I meant https://docs.docker.com/config/containers/resource_constraints/, i.e. directly restricting the CPUs a container can physically access. Although we might need some experimentation to see which option is best, and how the options are presented to the runtime.
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
junrushao1994 commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440328106 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ (same `operator+` hunk as quoted above) Review comment: yes, you are right.
[GitHub] [incubator-tvm] tqchen commented on pull request #5816: Type check added for FloatImm
tqchen commented on pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816#issuecomment-644262612 The other message corrections look good, please send another PR :)
[GitHub] [incubator-tvm] tqchen opened a new pull request #5817: [COMMUNITY] Siju Samuel -> Committer
tqchen opened a new pull request #5817: URL: https://github.com/apache/incubator-tvm/pull/5817 Please join us in welcoming @siju-samuel as a new committer of the TVM community. He has been actively contributing to various TVM frontends, including TFLite, darknet, and QNN. - [Commits](https://github.com/apache/incubator-tvm/commits?author=siju-samuel) - [Code Review](https://github.com/apache/incubator-tvm/pulls?utf8=%E2%9C%93=reviewed-by%3Asiju-samuel) - [Community](https://discuss.tvm.ai/u/siju-samuel/summary)
[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5816: Type check added for FloatImm
ANSHUMAN87 commented on pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816#issuecomment-644259958 > Thanks @ANSHUMAN87. There is a reason why we did not add a type check to the codebase here (to support custom data types), as shown here. > > Closing for now as the PR does not correspond to a case that we would like to fix. I believe the other error message corrections can still be taken; those messages were confusing when I was debugging an issue.
[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #5816: Type check added for FloatImm
ANSHUMAN87 commented on a change in pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816#discussion_r440324179 ## File path: tests/python/unittest/test_target_custom_datatypes.py ## @@ -128,7 +128,7 @@ def test_bfloat_add_and_cast_FloatImm(): Z = topi.cast( topi.add( topi.cast(X, dtype="custom[bfloat]16"), -tvm.tir.FloatImm("custom[bfloat]16", 1.5)), +tvm.tir.FloatImm("float", 1.5)), Review comment: I am sorry, I did not completely understand the logic behind it, i.e. how FloatImm can support a datatype other than float. It looks like a potential hole, which is why I added the check. Anyway, if it is intentional, I will leave it as is.
[GitHub] [incubator-tvm] t-vi commented on pull request #5791: Add a combine batch_matmul pass
t-vi commented on pull request #5791: URL: https://github.com/apache/incubator-tvm/pull/5791#issuecomment-644258600 @mbrookhart Yes, my use-case is transformers. The PyTorch frontend translates the matmul used in HuggingFace `transformers`' BERT into `batch_matmul`. The speedup is 1.5x-2x-ish on ROCm (gfx906), and also some on a GTX 1080 Ti, even though it currently hits a reshape right after `batch_matmul`. I don't quite reach the speed of ONNX Runtime yet. I'm currently preparing a detailed writeup (and that's the pattern of my recent PRs - tunable BMM, this, support for integers and other non-float32 types in the PyTorch frontend). I imagine that it would be cool to move the pass to pattern matching. I would expect that it would replace the code shared by the combine passes of `batch_matmul` and `conv2d` (and to some extent the `dense` combiner), rather than the parts that are separate. I have been wondering about the efficiency of `dense`, btw. - it mentions BERT as a use-case in the code comments, but it is unclear to me whether `dense` -> `batch_matmul` with "duplicated" (possibly stride-0) input is better than `dense` -> `dense` with non-contiguous results (though the columns would still be contiguous and only the rows would be interleaved between the ops). But then I haven't looked a lot at how TVM deals with strides (which is relatively significant because the self-attention typically has some reshapes that would be nice to fuse).
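The rewrite such a combine-parallel-batch_matmul pass performs can be checked numerically: two batch_matmuls sharing the same input are equivalent to one batch_matmul on the concatenated weights, followed by a split. A NumPy sketch with arbitrarily chosen shapes (not the TVM pass itself):

```python
import numpy as np

def batch_matmul(x, y):
    # TVM-style batch_matmul: x (b, m, k), y (b, n, k) -> (b, m, n)
    return x @ y.transpose(0, 2, 1)

rng = np.random.default_rng(0)
x  = rng.standard_normal((2, 4, 8))   # shared input
w1 = rng.standard_normal((2, 5, 8))   # branch 1 weights
w2 = rng.standard_normal((2, 3, 8))   # branch 2 weights

# Two parallel batch_matmuls on the same input ...
y1, y2 = batch_matmul(x, w1), batch_matmul(x, w2)

# ... fused into one call on the concatenated weights, then split back.
fused = batch_matmul(x, np.concatenate([w1, w2], axis=1))
z1, z2 = fused[:, :, :5], fused[:, :, 5:]

assert np.allclose(y1, z1) and np.allclose(y2, z2)
```

One larger GEMM generally utilizes the hardware better than several small ones, which is where the reported 1.5x-2x speedups can come from.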
[GitHub] [incubator-tvm] u99127 commented on a change in pull request #5815: [CI] Limit number of threads in all jobs
u99127 commented on a change in pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#discussion_r440319264 ## File path: tests/scripts/task_python_vta_fsim.sh ## @@ -20,6 +20,10 @@ set -e set -u source tests/scripts/setup-pytest-env.sh +# to avoid CI thread throttling. +export TVM_BIND_THREADS=0 +export OMP_NUM_THREADS=1 + Review comment: Thanks for the reply. Agreed on the Rust and C++ environments, which is why I suggested setup-test-env.sh :). On the second point, I would have thought the lower-level driver script would be able to override the environment variable for the test? I'd like to understand more about what you mean by directly limiting the resources of the container. Regards, Ramana
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5816: Type check added for FloatImm
tqchen commented on a change in pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816#discussion_r440316185 ## File path: tests/python/unittest/test_target_custom_datatypes.py ## @@ -128,7 +128,7 @@ def test_bfloat_add_and_cast_FloatImm(): Z = topi.cast( topi.add( topi.cast(X, dtype="custom[bfloat]16"), -tvm.tir.FloatImm("custom[bfloat]16", 1.5)), +tvm.tir.FloatImm("float", 1.5)), Review comment: There is a reason why we did not add a type check to the codebase here (to support custom data types), as shown here.
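tqchen's point is that a naive dtype check would reject custom datatypes like `custom[bfloat]16`, which FloatImm must continue to accept. A hedged sketch of a check that admits both (the dtype grammar here is an assumption; TVM's real validation consults its custom-datatype registry instead):

```python
import re

def is_float_like(dtype):
    """Accept built-in float dtypes and custom datatypes.

    Assumed grammar: 'float', 'float32', ... or 'custom[name]bits'.
    A strict check like dtype == 'float32' would break bfloat tests.
    """
    return bool(re.fullmatch(r'float\d*|custom\[\w+\]\d+', dtype))
```

A strict float-only check is exactly what the closed PR added, and exactly what the custom-datatype test in the diff relies on not existing.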
[GitHub] [incubator-tvm] tqchen commented on pull request #5816: Type check added for FloatImm
tqchen commented on pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816#issuecomment-644253039 Thanks @ANSHUMAN87. There is a reason why we did not add a type check to the codebase here (to support custom data types), as shown here. Closing for now as the PR does not correspond to a case that we would like to fix.
[GitHub] [incubator-tvm] tqchen closed pull request #5816: Type check added for FloatImm
tqchen closed pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
tqchen commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440309404 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ (same `operator+` hunk as quoted above) Review comment: There is no dispatch needed, because all the operations differ only in creation time.
[GitHub] [incubator-tvm] mbrookhart commented on pull request #5791: Add a combine batch_matmul pass
mbrookhart commented on pull request #5791: URL: https://github.com/apache/incubator-tvm/pull/5791#issuecomment-644241874 @t-vi Interesting PR! Thank you for submitting it! I presume the use case for this is Transformer-like models? Do you see a perf benefit from the rewrite? > This might be off topic. I think it's possible to use the pattern language to replace all "combine parallel XX op" passes. Maybe we could create an issue to check it. > > cc @mbrookhart @tqchen I would imagine yes, it would be pretty easy to implement this as a pattern and a rewrite. A pattern solution might have some complications, though: I don't think we currently have a way to match something with 2 or 3 or 4 branches in the same pattern, so it would require a number of patterns. I'll think about that.
[GitHub] [incubator-tvm] ANSHUMAN87 opened a new pull request #5816: Type check added for FloatImm
ANSHUMAN87 opened a new pull request #5816: URL: https://github.com/apache/incubator-tvm/pull/5816
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
junrushao1994 commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440296478 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ (same `operator+` hunk as quoted above) Review comment: In my implementation of runtime::Array, I overload `make_object` to `make_inplace_array`; in Map, I dispatch `make_object` to the appropriate map node type.
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
junrushao1994 commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440295471 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ (same `operator+` hunk as quoted above) Review comment: We should 1) mark `make_object` as deleted, in case there is any misuse in the codebase, and 2) have a separate subclass of StringObj to make it clearer.
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5740: [Object] Introduce POD-C Compliant tvm::Map
junrushao1994 commented on pull request #5740: URL: https://github.com/apache/incubator-tvm/pull/5740#issuecomment-644234500 @zhiics @icemelon9 could you guys take another look? much appreciated!
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
tqchen commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440277395 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ inline String& String::operator=(std::string other) { inline String& String::operator=(const char* other) { return operator=(std::string(other)); } -inline String operator+(const std::string lhs, const String& rhs) { - return lhs + rhs.operator std::string(); +template ::value || + std::is_same::value>::type, + typename = typename std::enable_if::value || + (std::is_same::value && + !std::is_same::value)>::type> +inline String operator+(const T& lhs, const U& rhs) { Review comment: The match is a bit too broad, let us still directly enumerate the combinations that involve String
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
tqchen commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440275128 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ inline String& String::operator=(std::string other) { inline String& String::operator=(const char* other) { return operator=(std::string(other)); } -inline String operator+(const std::string lhs, const String& rhs) { - return lhs + rhs.operator std::string(); +template ::value || + std::is_same::value>::type, + typename = typename std::enable_if::value || + (std::is_same::value && + !std::is_same::value)>::type> +inline String operator+(const T& lhs, const U& rhs) { + size_t lhs_size = lhs.size(); + size_t rhs_size = rhs.size(); + char* concat = new char[lhs_size + rhs_size + 1]; + std::memcpy(concat, lhs.data(), lhs_size); + std::memcpy(concat + lhs_size, rhs.data(), rhs_size); + auto ptr = make_object(); Review comment: We will need to have a customized deleter. Directly allocating StringObj will cause the memory not being released (because the deleter won't recycle the data in the StringObj). Let us use StringObj::FromStd for now. Alternatively, create another subclass of StringObj that contains the customized deleter (hopefully via InplaceArray, to further reduce indirection) cc @junrushao1994
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5806: [RUNTIME][String] Overload string operators
tqchen commented on a change in pull request #5806: URL: https://github.com/apache/incubator-tvm/pull/5806#discussion_r440275128 ## File path: include/tvm/runtime/container.h ## @@ -1410,10 +1344,114 @@ +inline String operator+(const char* lhs, const String& rhs) { + size_t lhs_size = std::strlen(lhs); + size_t rhs_size = rhs.size(); + char* concat = new char[lhs_size + rhs_size + 1]; + std::memcpy(concat, lhs, lhs_size); + std::memcpy(concat + lhs_size, rhs.data(), rhs_size); Review comment: Create a common private static function, Concat(lhs, size, rhs, size)
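The preallocate-and-copy structure that tqchen asks to factor into a shared `Concat(lhs, size, rhs, size)` helper is language-neutral: size both operands, allocate the target buffer once, then copy each operand into its half. A hedged sketch of that pattern (Python stand-in for illustration only, not TVM code):

```python
def concat(lhs: bytes, rhs: bytes) -> bytes:
    # Mirror of the C++ pattern in the diff: one allocation, two copies.
    lhs_size, rhs_size = len(lhs), len(rhs)
    buf = bytearray(lhs_size + rhs_size)  # like `new char[lhs_size + rhs_size]`
    buf[:lhs_size] = lhs                  # memcpy(concat, lhs, lhs_size)
    buf[lhs_size:] = rhs                  # memcpy(concat + lhs_size, rhs, rhs_size)
    return bytes(buf)

print(concat(b"tvm::", b"String"))  # b'tvm::String'
```

In the actual C++ the interesting part is ownership of the buffer, which is why the review insists on `StringObj::FromStd` or a subclass with a customized deleter rather than handing a raw `new[]` pointer to a plain `StringObj`.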
[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule
lhutton1 commented on a change in pull request #5770: URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r440275397 ## File path: python/tvm/runtime/module.py ## @@ -222,21 +222,29 @@ def evaluator(*args): except NameError: raise NameError("time_evaluate is only supported when RPC is enabled") -def _collect_dso_modules(self): -"""Helper function to collect dso modules, then return it.""" -visited, stack, dso_modules = set(), [], [] +def _collect_dso_metadata_modules(self): +""" +Helper function to collect dso modules and metadata init module. There +is at most one medata init module if it exists. +""" +visited, stack, dso_modules, metadata_init = set(), [], [], None # append root module visited.add(self) stack.append(self) while stack: module = stack.pop() if module._dso_exportable(): dso_modules.append(module) +elif module.type_key == "module_init": +assert not metadata_init, \ +"At most one module initializer is allowed" Review comment: Makes sense, thanks. I was thinking one MetadataModule would be created for each backend.
[GitHub] [incubator-tvm] zhiics commented on issue #5792: [RUNTIME][IR] String operator+
zhiics commented on issue #5792: URL: https://github.com/apache/incubator-tvm/issues/5792#issuecomment-644214755 #5770 #5806
[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule
zhiics commented on a change in pull request #5770: URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r440266641 ## File path: python/tvm/runtime/module.py ## @@ -222,21 +222,29 @@ def evaluator(*args): except NameError: raise NameError("time_evaluate is only supported when RPC is enabled") -def _collect_dso_modules(self): -"""Helper function to collect dso modules, then return it.""" -visited, stack, dso_modules = set(), [], [] +def _collect_dso_metadata_modules(self): +""" +Helper function to collect dso modules and metadata init module. There +is at most one medata init module if it exists. +""" +visited, stack, dso_modules, metadata_init = set(), [], [], None # append root module visited.add(self) stack.append(self) while stack: module = stack.pop() if module._dso_exportable(): dso_modules.append(module) +elif module.type_key == "module_init": +assert not metadata_init, \ +"At most one module initializer is allowed" Review comment: The Python-side changes will all be removed. We have a MetadataModule to dispatch the initialization. We don't need to consider priority because the symbols are unique.
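The traversal that `_collect_dso_metadata_modules` performs — a DFS over imported modules that gathers DSO-exportable modules and at most one metadata initializer — can be sketched with a toy stand-in. Everything here (`ToyModule` and its fields) is a hypothetical mock of `tvm.runtime.Module`, kept only close enough to show the collection logic:

```python
class ToyModule:
    """Hypothetical stand-in for tvm.runtime.Module."""
    def __init__(self, type_key, dso_exportable=False, imports=()):
        self.type_key = type_key
        self._dso = dso_exportable
        self.imported_modules = list(imports)

    def _dso_exportable(self):
        return self._dso


def collect_dso_and_metadata(root):
    # DFS over the module import graph, mirroring the quoted diff.
    visited, stack, dso_modules, metadata_init = {id(root)}, [root], [], None
    while stack:
        module = stack.pop()
        if module._dso_exportable():
            dso_modules.append(module)
        elif module.type_key == "module_init":
            assert metadata_init is None, "At most one module initializer is allowed"
            metadata_init = module
        for m in module.imported_modules:
            if id(m) not in visited:
                visited.add(id(m))
                stack.append(m)
    return dso_modules, metadata_init


llvm = ToyModule("llvm", dso_exportable=True)
meta = ToyModule("module_init")
root = ToyModule("csource", imports=[llvm, meta])
dsos, init = collect_dso_and_metadata(root)
```

The `visited` set matters because modules can be imported from more than one parent; without it the DFS could collect a module twice.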
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5815: [CI] Limit number of threads in all jobs
tqchen commented on a change in pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#discussion_r440269311 ## File path: tests/scripts/task_python_vta_fsim.sh ## @@ -20,6 +20,10 @@ set -e set -u source tests/scripts/setup-pytest-env.sh +# to avoid CI thread throttling. +export TVM_BIND_THREADS=0 +export OMP_NUM_THREADS=1 + Review comment: I have thought about that. However, we would also need to set up the same env for the Rust and C++ tests, and different tests might need different threading settings (we want to have at least one multi-threaded case). So I concluded that the duplication is worth it and ended up not moving it. An alternative might be to directly limit the resources of the docker container. Will look into it and report back
[GitHub] [incubator-tvm] u99127 commented on a change in pull request #5815: [CI] Limit number of threads in all jobs
u99127 commented on a change in pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#discussion_r440267908 ## File path: tests/scripts/task_python_vta_fsim.sh ## @@ -20,6 +20,10 @@ set -e set -u source tests/scripts/setup-pytest-env.sh +# to avoid CI thread throttling. +export TVM_BIND_THREADS=0 +export OMP_NUM_THREADS=1 + Review comment: Why not move this into setup-pytest-env.sh and maybe rename it to setup-test-env.sh if you think it's too pytest-centric?
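The thread limits the diff exports in shell can equivalently be set from Python, as long as they are in place before the library that reads them initializes its thread pool. The variable names come straight from the diff; whether a given library honors them is that library's business:

```python
import os

# Same settings as the shell exports in the quoted diff. These must be
# exported before the threaded runtime is imported/initialized, because
# thread pools typically read the environment once at startup.
os.environ["TVM_BIND_THREADS"] = "0"  # don't pin worker threads to cores
os.environ["OMP_NUM_THREADS"] = "1"   # one OpenMP thread, to avoid CI throttling
```

Setting them inside the test process after the runtime has already spawned its workers would have no effect, which is one reason the PR puts them in the shell scripts that launch each job.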
[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule
lhutton1 commented on a change in pull request #5770: URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r440167425 ## File path: python/tvm/runtime/module.py ## @@ -222,21 +222,29 @@ def evaluator(*args): except NameError: raise NameError("time_evaluate is only supported when RPC is enabled") -def _collect_dso_modules(self): -"""Helper function to collect dso modules, then return it.""" -visited, stack, dso_modules = set(), [], [] +def _collect_dso_metadata_modules(self): +""" +Helper function to collect dso modules and metadata init module. There +is at most one medata init module if it exists. Review comment: *metadata ## File path: python/tvm/runtime/module.py ## @@ -222,21 +222,29 @@ def evaluator(*args): except NameError: raise NameError("time_evaluate is only supported when RPC is enabled") -def _collect_dso_modules(self): -"""Helper function to collect dso modules, then return it.""" -visited, stack, dso_modules = set(), [], [] +def _collect_dso_metadata_modules(self): +""" +Helper function to collect dso modules and metadata init module. There +is at most one medata init module if it exists. +""" +visited, stack, dso_modules, metadata_init = set(), [], [], None # append root module visited.add(self) stack.append(self) while stack: module = stack.pop() if module._dso_exportable(): dso_modules.append(module) +elif module.type_key == "module_init": +assert not metadata_init, \ +"At most one module initializer is allowed" Review comment: How does this interact in the case of multiple backends that require the use of metadata module? For example it may be the case in the future that we want to specify something like: DNNL (first priority), then C-Codegen (second priority), then TVM (fallback). If only one metadata module is allowed, would this contain information for both DNNL and C-Codegen?
[GitHub] [incubator-tvm] tqchen commented on pull request #5815: [CI] Limit number of threads in all jobs
tqchen commented on pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815#issuecomment-644207809 cc @yzhliu @zhiics @icemelon9 @tmoreau89
[GitHub] [incubator-tvm] tqchen opened a new pull request #5815: [CI] Limit number of threads in all jobs
tqchen opened a new pull request #5815: URL: https://github.com/apache/incubator-tvm/pull/5815 To reduce pressure on a crowded CI setting.
[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
FrozenGene commented on a change in pull request #5754: URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r440237827 ## File path: python/tvm/relay/op/nn/nn.py ## @@ -2134,6 +2202,25 @@ def contrib_conv2d_winograd_weight_transform(weight, return _make.contrib_conv2d_winograd_weight_transform(weight, tile_size) +def contrib_conv2d_gemm_weight_transform(weights): Review comment: Let us add one assert. It is the same as Winograd.
[incubator-tvm] branch master updated (c8b7d6d -> d97dcd7)
This is an automated email from the ASF dual-hosted git repository. zhaowu pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from c8b7d6d [MXNET]conv3d and conv3d_transpose addedx (#5814) add d97dcd7 Pin hand landmark network to version 0.7.4. (#5813) No new revisions were added by this update. Summary of changes: tests/python/frontend/tflite/test_forward.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
[GitHub] [incubator-tvm] FrozenGene closed issue #5774: [TFLite] KeyError: 'conv2d/Kernel' on test_forward_mediapipe_hand_landmark()
FrozenGene closed issue #5774: URL: https://github.com/apache/incubator-tvm/issues/5774
[GitHub] [incubator-tvm] FrozenGene commented on issue #5774: [TFLite] KeyError: 'conv2d/Kernel' on test_forward_mediapipe_hand_landmark()
FrozenGene commented on issue #5774: URL: https://github.com/apache/incubator-tvm/issues/5774#issuecomment-644184592 > Do we also need another issue to indicate that we don't support this kind of quantized networks and tag it as help needed ? I agree.
[GitHub] [incubator-tvm] FrozenGene commented on pull request #5813: Pin hand landmark network to version 0.7.4.
FrozenGene commented on pull request #5813: URL: https://github.com/apache/incubator-tvm/pull/5813#issuecomment-644184928 Thanks @leandron
[GitHub] [incubator-tvm] FrozenGene merged pull request #5813: Pin hand landmark network to version 0.7.4.
FrozenGene merged pull request #5813: URL: https://github.com/apache/incubator-tvm/pull/5813
[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: [Draft] Support Module based interface runtime
FrozenGene commented on a change in pull request #5753: URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r440231237 ## File path: python/tvm/runtime/module.py ## @@ -41,6 +41,10 @@ def __init__(self, handle): self.handle = handle self._entry = None self.entry_name = "__tvm_main__" +# TODO:(FrozenGene): support rpc +if self.type_key == 'GraphRuntimeFactory': +#from tvm.runtime.graph_runtime_factory import GraphRuntimeFactoryModule +self._entry = self.runtime_create Review comment: To support selecting among multiple models, we want to support `lib['default'](ctx)`. So I override `_entry` (used by `__call__`) and `__getitem__`. @tqchen Do you have more elegant ways to do it? Especially for `rpc`.
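The interface FrozenGene describes — `lib['default'](ctx)` returning a runtime for a named model — comes down to `__getitem__` returning a callable. A toy illustration of that lookup-then-call pattern (all names here are hypothetical, not the actual GraphRuntimeFactory API):

```python
class ToyRuntimeFactory:
    """Hypothetical sketch of a factory that maps model names to creators."""
    def __init__(self, graphs):
        self._graphs = graphs  # model name -> graph definition

    def __getitem__(self, name):
        graph = self._graphs[name]
        # Return a callable so users can write factory['default'](ctx).
        def create(ctx):
            return {"graph": graph, "ctx": ctx}  # stand-in for a runtime object
        return create


factory = ToyRuntimeFactory({"default": "graph-json-for-default"})
runtime = factory["default"]("cpu")
```

The awkwardness FrozenGene raises is that `Module.__getitem__` normally resolves packed functions, so overloading it for model selection has to coexist with that behavior — and over RPC the lookup happens on the remote side, which is why the plain override above does not transfer directly.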
[GitHub] [incubator-tvm] masahi merged pull request #5814: [MXNET]conv3d and conv3d_transpose addedx
masahi merged pull request #5814: URL: https://github.com/apache/incubator-tvm/pull/5814
[GitHub] [incubator-tvm] giuseros commented on pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
giuseros commented on pull request #5754: URL: https://github.com/apache/incubator-tvm/pull/5754#issuecomment-644025534 @anijain2305, thanks for the review! About getting rid of the legalization, I would not do that for now. It is in my backlog to go back to this issue and try to retrieve the strategy from the legalization pass. This should give us more optimization options. If that turns out not to be possible, then yes, I would remove the pass and do everything in the `alter_layout` pass.
[GitHub] [incubator-tvm] giuseros commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
giuseros commented on a change in pull request #5754: URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r440055355 ## File path: topi/python/topi/arm_cpu/conv2d_alter_op.py ## @@ -235,5 +239,37 @@ def _alter_conv2d_layout(attrs, inputs, tinfos, out_type): new_attrs['out_layout'], out_dtype], topi_tmpl) dispatch_ctx.update(target, new_workload, cfg) return relay.nn.contrib_depthwise_conv2d_nchwc(*inputs, **new_attrs) +if topi_tmpl == "conv2d_NHWC_quantized.arm_cpu": +assert (data.dtype == 'int8' and kernel.dtype == 'int8' or +data.dtype == 'uint8' and kernel.dtype == 'uint8') +CO, IC, KH, KW = get_const_tuple(kernel.shape) + +K = KH * KW * IC +N = CO + +pad_k = 0 +pad_n = 0 + +if N % 4 != 0: +pad_n = 4 - (N % 4) + +if K % 16 != 0: +pad_k = 16 - (K % 16) + +N_padded = N + pad_n +K_padded = K + pad_k + +kernel_expr = relay.nn.contrib_conv2d_gemm_weight_transform(inputs[1]) Review comment: I wasn't able to find any. The idea is similar to Winograd, but Winograd uses tiling, while here I am [block transposing and interleaving](https://discuss.tvm.ai/uploads/default/original/2X/6/635d11f9639f849945788694f64346ca2bdcd461.png). We would need the kernel layout only to assert against the wrong one. So yes, if you think it could be useful, I can do that. As for changing the name to `contrib_conv2d_gemm_hwio_weight_transform`, I can easily implement that, but this means that if we want to support NCHW in the future we will have to change it again (and I think this is why they didn't do that for Winograd)
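The padding arithmetic in the quoted `_alter_conv2d_layout` hunk is self-contained and easy to check in isolation: the flattened reduction axis K = KH * KW * IC is rounded up to a multiple of 16 and the output-channel axis N = CO to a multiple of 4, so the GEMM tiles divide evenly. A minimal restatement of just that logic:

```python
def gemm_pad_sizes(CO, IC, KH, KW):
    """Mirror of the padding computation in the quoted diff."""
    K = KH * KW * IC            # flattened reduction axis
    N = CO                      # output channels
    pad_n = (4 - N % 4) % 4     # pad N up to a multiple of 4
    pad_k = (16 - K % 16) % 16  # pad K up to a multiple of 16
    return N + pad_n, K + pad_k

# e.g. a 3x3 kernel with 3 input channels and 30 output channels:
print(gemm_pad_sizes(30, 3, 3, 3))  # (32, 32)
```

Here K = 3*3*3 = 27 pads up to 32 and N = 30 pads up to 32, matching what `N_padded` and `K_padded` would hold in the diff.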
[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5814: [MXNET]conv3d and conv3d_transpose addedx
siju-samuel opened a new pull request #5814: URL: https://github.com/apache/incubator-tvm/pull/5814 Support for the following ops is added for MXNet: conv3d, conv3d_transpose. @masahi @FrozenGene Please help me to review this PR. TIA Conv3d_transpose test cases are not added because of the below limitation: `src/operator/nn/deconvolution.cc:105: If not using CUDNN, only 1D or 2D Deconvolution is supported`
[GitHub] [incubator-tvm] giuseros commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
giuseros commented on a change in pull request #5754: URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r440052506 ## File path: python/tvm/relay/op/nn/nn.py ## @@ -2134,6 +2202,25 @@ def contrib_conv2d_winograd_weight_transform(weight, return _make.contrib_conv2d_winograd_weight_transform(weight, tile_size) +def contrib_conv2d_gemm_weight_transform(weights): Review comment: It doesn't need the layout (for now), but it only supports NHWC (which it assumes). Should I assert if the wrong layout is passed?
[GitHub] [incubator-tvm] giuseros commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures
giuseros commented on a change in pull request #5754: URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r440044546 ## File path: python/tvm/relay/op/nn/nn.py ## @@ -1976,6 +1976,74 @@ def contrib_conv2d_winograd_without_weight_transform(data, kernel_layout, out_layout, out_dtype) +def contrib_conv2d_gemm_without_weight_transform(data, + weight, + strides=(1, 1), + padding=(0, 0), + dilation=(1, 1), + groups=1, + channels=None, + kernel_size=None, + data_layout="NCHW", + kernel_layout="OIHW", + out_layout="", + out_dtype=""): +r"""2D convolution with gemm algorithm. Review comment: All the other help texts are represented as raw strings (i.e., starting with an "r"). I think this is because raw strings leave backslashes in the string, and the string can be parsed later by help formatter tools. In this particular case, I don't need a raw string, since I don't have any backslash in the help text, so if you wish I can remove it
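giuseros's point about the `r` prefix is easy to demonstrate: a raw string keeps backslashes literal, which matters when a docstring carries LaTeX-style markup for documentation tools; without it, sequences like `\f` are interpreted as escape characters:

```python
plain = "\\frac{a}{b}"  # needs an escaped backslash
raw = r"\frac{a}{b}"    # raw string: the backslash is kept as-is

assert plain == raw  # both hold a single literal backslash

# Without the r prefix, "\f" would silently become a one-character
# form feed instead of the two characters backslash + f:
assert "\f" != r"\f"
assert len(r"\f") == 2
```

Which also explains the reviewer's conclusion: if the docstring contains no backslashes at all, the `r` prefix changes nothing and can be dropped.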
[GitHub] [incubator-tvm] maheshambule commented on pull request #5495: [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator
maheshambule commented on pull request #5495: URL: https://github.com/apache/incubator-tvm/pull/5495#issuecomment-644008274 @siju-samuel, Thanks. Fixed as per the review comments.
[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5495: [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator
maheshambule commented on a change in pull request #5495: URL: https://github.com/apache/incubator-tvm/pull/5495#discussion_r440038124 ## File path: tests/python/frontend/tflite/test_forward.py ## @@ -901,79 +859,6 @@ def test_all_resize(): if 'RESIZE_NEAREST_NEIGHBOR' in dir(BuiltinOperator()): _test_resize(tf.image.resize_nearest_neighbor, data, align_corners=False) -### -# Range -# - -def _test_range(start, limit, delta): Review comment: Fixed. It got removed while resolving conflicts.
[GitHub] [incubator-tvm] leandron commented on pull request #5813: Pin hand landmark network to version 0.7.4.
leandron commented on pull request #5813: URL: https://github.com/apache/incubator-tvm/pull/5813#issuecomment-644007448 cc @FrozenGene @tqchen
[GitHub] [incubator-tvm] leandron opened a new pull request #5813: Pin hand landmark network to version 0.7.4.
leandron opened a new pull request #5813: URL: https://github.com/apache/incubator-tvm/pull/5813 Versions above 0.7.4 (hand landmark network) are broken due to changes in the quantization operations in the model, which are currently not supported by TVM. Fixes #5774.
[GitHub] [incubator-tvm] leandron commented on issue #5774: [TFLite] KeyError: 'conv2d/Kernel' on test_forward_mediapipe_hand_landmark()
leandron commented on issue #5774: URL: https://github.com/apache/incubator-tvm/issues/5774#issuecomment-643996001 > I think it is nice. @leandron Could you help to make a pull request to lock it in the tag you mention? Sure, will do.
[GitHub] [incubator-tvm] FrozenGene commented on issue #5774: [TFLite] KeyError: 'conv2d/Kernel' on test_forward_mediapipe_hand_landmark()
FrozenGene commented on issue #5774: URL: https://github.com/apache/incubator-tvm/issues/5774#issuecomment-643995201 > I think this is a good workaround. > > I'd suggest to use a tag on that repository (rather than a commit hash), to make the URL a bit more meaningful: https://github.com/google/mediapipe/blob/v0.7.4/mediapipe/models/hand_landmark.tflite I think it is nice. @leandron Could you help to make a pull request to lock it in the tag you mention?
[GitHub] [incubator-tvm] leandron commented on issue #5774: [TFLite] KeyError: 'conv2d/Kernel' on test_forward_mediapipe_hand_landmark()
leandron commented on issue #5774: URL: https://github.com/apache/incubator-tvm/issues/5774#issuecomment-643993004 I think this is a good workaround. I'd suggest to use a tag on that repository (rather than a commit hash), to make the URL a bit more meaningful: https://github.com/google/mediapipe/blob/v0.7.4/mediapipe/models/hand_landmark.tflite
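The tag-vs-hash suggestion can be sketched as follows. This is an illustration only: the path comes from the URL in the comment above, and the `raw.githubusercontent.com` form is the standard GitHub convention for fetching file contents; whether the TVM test actually builds its URL this way is an assumption.

```python
# Pinning the test model to a human-readable tag instead of a commit hash.
# Swapping the tag in one place updates the whole URL, and the tag name
# (v0.7.4) documents exactly which release is being tested.
MEDIAPIPE_TAG = "v0.7.4"
MODEL_URL = (
    "https://raw.githubusercontent.com/google/mediapipe/"
    + MEDIAPIPE_TAG
    + "/mediapipe/models/hand_landmark.tflite"
)
```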
[GitHub] [incubator-tvm] gussmith23 opened a new pull request #5812: Bring Your Own Datatypes
gussmith23 opened a new pull request #5812: URL: https://github.com/apache/incubator-tvm/pull/5812 Draft of Bring Your Own Datatypes PR. - [X] Rebase onto current TVM - [ ] Better documentation - [ ] More unit tests - [ ] Notebook example
[GitHub] [incubator-tvm] moderato opened a new pull request #5811: [AutoTVM][Fix] Fixed 'not in list' error for define_reorder with 'candidate' policy.
moderato opened a new pull request #5811: URL: https://github.com/apache/incubator-tvm/pull/5811 Hello! This is my first time sending a pull request. I found that when the `candidate` policy is chosen for `define_reorder`, AutoTVM throws an `xxx not in list` error. This is because in https://github.com/apache/incubator-tvm/blob/d0f988988a6f4875d176da1d79628feebf2ee023/python/tvm/autotvm/task/space.py#L313 and other similar places, `x` has the type `iter_var` while elements in `axes` have the type `VirtualAxis`. This pull request fixes the bug. A review would be much appreciated! @tqchen @comaniac @merrymercy
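The type mismatch behind the `not in list` error can be reproduced with stand-in classes. These `IterVar`/`VirtualAxis` classes are simplified stand-ins for the real TVM types, kept only to show the mechanism: `list.index` compares elements with `__eq__`, and two objects of unrelated types never compare equal by default, even when they name the same axis.

```python
# Stand-ins for the two unrelated axis types involved in the bug.
class IterVar:
    def __init__(self, name):
        self.name = name


class VirtualAxis:
    def __init__(self, name):
        self.name = name


axes = [VirtualAxis("i"), VirtualAxis("j")]
x = IterVar("j")

# list.index uses __eq__; with default identity-based equality across
# distinct types, the lookup fails with "... is not in list".
found = True
try:
    axes.index(x)
except ValueError:
    found = False

# One way to fix it is to compare on a shared key (e.g. the axis name):
idx = [ax.name for ax in axes].index(x.name)
```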