[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6450: [WINDOWS][MSVC] Fix MSVC warnings
tqchen edited a comment on pull request #6450: URL: https://github.com/apache/incubator-tvm/pull/6450#issuecomment-690861794 cc @tmoreau89 @rkimball @jroesch This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen opened a new pull request #6450: [WINDOWS][MSVC] Fix MSVC warnings
tqchen opened a new pull request #6450: URL: https://github.com/apache/incubator-tvm/pull/6450 This PR fixes various warnings raised by MSVC. TODO: deprecate `__tvm_main__` symbol and update the testcase so Windows works as normal.
[GitHub] [incubator-tvm] tqchen commented on pull request #6450: [WINDOWS][MSVC] Fix MSVC warnings
tqchen commented on pull request #6450: URL: https://github.com/apache/incubator-tvm/pull/6450#issuecomment-690861794 cc @tmoreau89 @rkimball
[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models
masahi edited a comment on pull request #6449: URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865 @kevinthesun Thanks for working on this. Can you split this into multiple PRs? In particular, besides the new op conversion, you made many non trivial changes to existing ops. Without tests for the latter changes, it is hard to tell what they are for. We can merge the new op conversion first (as they came with tests).
[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models
masahi commented on pull request #6449: URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865 @kevinthesun Thanks for working on this. Can you split this into multiple PRs? In particular, besides the new op conversion, you made many non trivial changes to existing ops. Without tests for the latter changes, it is hard to tell what they are for. We can merge the new op conversion first (as they came with tests).
[GitHub] [incubator-tvm] areusch commented on pull request #6334: µTVM RPC server and Part 1 of AutoTVM compilation infrastructure
areusch commented on pull request #6334: URL: https://github.com/apache/incubator-tvm/pull/6334#issuecomment-690854878 @tqchen @liangfu please take another look when you have a minute
[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models
masahi commented on pull request #6449: URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690846159 cc @siju-samuel @t-vi
[GitHub] [incubator-tvm] masahi commented on a change in pull request #6446: [ONNX] Add support for GatherElements conversion
masahi commented on a change in pull request #6446: URL: https://github.com/apache/incubator-tvm/pull/6446#discussion_r486745778 ## File path: tests/python/frontend/onnx/test_forward.py ## @@ -425,6 +425,45 @@ def test_gather(): verify_gather((4, 3, 5, 6), [[2, 1, 0, 0]], 0, 'float32') +def verify_gatherelements(in_shape, indices, axis): +x = np.random.uniform(size=in_shape).astype("float32") +indices = np.array(indices, dtype="int32") +print(x.shape) +print(indices.shape) + Review comment: oops thanks, fixed
[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6446: [ONNX] Add support for GatherElements conversion
siju-samuel commented on a change in pull request #6446: URL: https://github.com/apache/incubator-tvm/pull/6446#discussion_r486744994 ## File path: tests/python/frontend/onnx/test_forward.py ## @@ -425,6 +425,45 @@ def test_gather(): verify_gather((4, 3, 5, 6), [[2, 1, 0, 0]], 0, 'float32') +def verify_gatherelements(in_shape, indices, axis): +x = np.random.uniform(size=in_shape).astype("float32") +indices = np.array(indices, dtype="int32") +print(x.shape) +print(indices.shape) + Review comment: Remove these prints
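[Editor's note] The GatherElements semantics under review can be checked against NumPy: `np.take_along_axis` implements exactly the same indexing rule. The helper below is an illustrative sketch, not part of the TVM test suite.

```python
import numpy as np

def gather_elements_ref(data, indices, axis):
    # ONNX GatherElements: for axis=1, out[i][j] = data[i][indices[i][j]]
    # (and analogously for other axes). np.take_along_axis matches this
    # rule exactly, so it serves as a reference implementation.
    return np.take_along_axis(data, indices, axis=axis)

x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype="float32")
idx = np.array([[0, 0], [1, 0]], dtype="int64")
out = gather_elements_ref(x, idx, axis=1)  # [[1., 1.], [4., 3.]]
```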
[GitHub] [incubator-tvm] Beya2019 commented on a change in pull request #6443: [RELAY][OP] roi_align operator alter layout
Beya2019 commented on a change in pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486742150 ## File path: python/tvm/relay/op/vision/rcnn.py ## @@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N Parameters -- data : relay.Expr -4-D tensor with shape [batch, channel, height, width] +4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel] Review comment: @kevinthesun and @anijain2305, I would like to add one more point. On our own target, this really affects performance, because it inserts many layout_transform operators (our convolution only supports NHWC, so if roi_align only supports NCHW, many additional layout_transform operators have to be inserted into the network). So I also think an NHWC layout implementation is really needed for this op for performance reasons. And this submission is needed for layout-convert support whether or not the NHWC layout is realized; the two functions mentioned are not conflicting but complementary. I also hope the NHWC layout TOPI implementation lands as soon as possible.
[GitHub] [incubator-tvm] Beya2019 commented on a change in pull request #6443: [RELAY][OP] roi_align operator alter layout
Beya2019 commented on a change in pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486734730 ## File path: python/tvm/relay/op/vision/rcnn.py ## @@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N Parameters -- data : relay.Expr -4-D tensor with shape [batch, channel, height, width] +4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel] Review comment: Hi @kevinthesun and @anijain2305, the NHWC layout TOPI implementation for roi_align is indeed not currently supported for traditional TVM targets. But in relay/frontend/mxnet.py the _mx_roi_align operator is fixed to NCHW (new_attrs["layout"] = "NCHW"), so only the NCHW TOPI implementation is called normally. For a special target that only supports the NHWC layout, we can add extra code in mxnet.py (or another frontend) as follows (RFC: https://tvm.apache.org/docs/dev/convert_layout.html#usage):
```
desired_layouts = {'vision.roi_align': ['NHWC', 'default']}
# Convert the layout to NHWC.
# RemoveUnusedFunctions is used to clean up the graph.
seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                relay.transform.ConvertLayout(desired_layouts)])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```
So this does not cause a compilation failure, because the NHWC TOPI implementation path is never called for traditional TVM targets; it just lets a special NHWC-only target consume the NHWC Relay IR with its own backend code. Certainly, the NHWC layout TOPI implementation will have to be realized sooner or later; I think it can be added gradually in the future.
[GitHub] [incubator-tvm] kevinthesun opened a new pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models
kevinthesun opened a new pull request #6449: URL: https://github.com/apache/incubator-tvm/pull/6449 Some necessary improvements for pytorch od models. @zhiics @yongwww @masahi
[GitHub] [incubator-tvm] jroesch opened a new pull request #6448: [Format] Convert all Python code w/o CI
jroesch opened a new pull request #6448: URL: https://github.com/apache/incubator-tvm/pull/6448 cc @tqchen this one applies formatting without CI machinery.
[GitHub] [incubator-tvm] jroesch commented on pull request #6437: [RFC][Formatting] Add scripts for applying Black to the Python code.
jroesch commented on pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#issuecomment-690821576 @tqchen recommended that we first format the entire code base using these settings and then try to land the CI parts; going to open a second PR with the fully formatted repo.
[GitHub] [incubator-tvm] tqchen merged pull request #6444: CUDA: broaden path detection
tqchen merged pull request #6444: URL: https://github.com/apache/incubator-tvm/pull/6444
[incubator-tvm] branch master updated (ecba2f3 -> 355720e)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from ecba2f3 [Relay][Op] Fix Reshape Compute (#6396) add 355720e CUDA: broaden path detection (#6444) No new revisions were added by this update. Summary of changes: python/tvm/contrib/nvcc.py | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-)
[incubator-tvm] branch master updated (aeef16d -> ecba2f3)
This is an automated email from the ASF dual-hosted git repository. kevinthesun pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from aeef16d [QNN][Relay] Fixed bug in quantized conv2d. (#6420) add ecba2f3 [Relay][Op] Fix Reshape Compute (#6396) No new revisions were added by this update. Summary of changes: src/relay/op/tensor/transform.cc | 85 ++-- src/relay/op/tensor/transform.h | 9 + tests/python/relay/test_any.py | 27 + 3 files changed, 91 insertions(+), 30 deletions(-)
[GitHub] [incubator-tvm] kevinthesun merged pull request #6396: [Relay][Op] Fix Reshape Compute
kevinthesun merged pull request #6396: URL: https://github.com/apache/incubator-tvm/pull/6396
[GitHub] [incubator-tvm] kevinthesun commented on pull request #6396: [Relay][Op] Fix Reshape Compute
kevinthesun commented on pull request #6396: URL: https://github.com/apache/incubator-tvm/pull/6396#issuecomment-690805570 Thanks @mbrookhart @zhiics @tqchen @electriclilies
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6443: [RELAY][OP] roi_align operator alter layout
kevinthesun commented on a change in pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486679507 ## File path: python/tvm/relay/op/vision/rcnn.py ## @@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N Parameters -- data : relay.Expr -4-D tensor with shape [batch, channel, height, width] +4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel] Review comment: Do we really need another layout for this op? Layout won't affect performance much for it?
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6443: [RELAY][OP] roi_align operator alter layout
kevinthesun commented on a change in pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486679507 ## File path: python/tvm/relay/op/vision/rcnn.py ## @@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N Parameters -- data : relay.Expr -4-D tensor with shape [batch, channel, height, width] +4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel] Review comment: Do we really need another layout for this op, since layout won't affect performance much for it?
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6443: [RELAY][OP] roi_align operator alter layout
kevinthesun commented on a change in pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#discussion_r486679507 ## File path: python/tvm/relay/op/vision/rcnn.py ## @@ -24,7 +24,7 @@ def roi_align(data, rois, pooled_size, spatial_scale, sample_ratio=-1, layout='N Parameters -- data : relay.Expr -4-D tensor with shape [batch, channel, height, width] +4-D tensor with shape [batch, channel, height, width] or [batch, height, width, channel] Review comment: Do we really need another layout for this op?
[GitHub] [incubator-tvm] kevinthesun commented on pull request #6443: [RELAY][OP] roi_align operator alter layout
kevinthesun commented on pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#issuecomment-690774753 Yeah. As @anijain2305 mentioned, we only have ```roi_align_nchw``` in topi.
[GitHub] [incubator-tvm] t-vi commented on pull request #6447: ROCm: use GcnArch for mcpu and ApiVersion to select code object version
t-vi commented on pull request #6447: URL: https://github.com/apache/incubator-tvm/pull/6447#issuecomment-690731565 @junrushao1994 Yeah, and I'll admit that I wouldn't have spotted it without trying to run it on hardware that is particular about it... I actually hope that we might be able to drop the code-object thing in due time, but 3.5 (which changed this) was only released in June and I'm not sure how long we would need to give people to upgrade ROCm.
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6437: [RFC][Formatting] Add scripts for applying Black to the Python code.
comaniac commented on a change in pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#discussion_r486620292 ## File path: tests/lint/git-black.sh ## @@ -0,0 +1,72 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +set -e +set -u +set -o pipefail + +if [[ "$1" == "-i" ]]; then +INPLACE_FORMAT=1 +shift 1 +else +INPLACE_FORMAT=0 +fi + +if [[ "$#" -lt 1 ]]; then +echo "Usage: tests/lint/git-black.sh [-i] " +echo "" +echo "Run black-format on files that changed since " +echo "Examples:" +echo "- Compare last one commit: tests/lint/git-black.sh HEAD~1" +echo "- Compare against upstream/master: tests/lint/git-black.sh upstream/master" +echo "You can also add -i option to do inplace format" +exit 1 +fi + +cleanup() +{ + rm -rf /tmp/$$.black-format.txt +} +trap cleanup 0 + + +if [ -x "$(command -v black)" ]; then +BLACK=black +else +echo "Cannot find black" +exit 1 +fi + +# Print out specific version + +echo "Version Information: $(${BLACK} --version)" Review comment: Seems like `BLACK` is only used for displaying the version. If so, how about moving this line into the if-block so that we can get rid of `BLACK` altogether.
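[Editor's note] The restructure suggested in the review above could look like the following sketch (an assumption of intent, not the final script; `git` stands in for `black` so the snippet runs even where black is not installed):

```shell
# Hypothetical restructure: print the version inside the if-block that
# verifies the tool exists, so the BLACK variable is no longer needed.
# `git` is used here as a stand-in for `black`.
if [ -x "$(command -v git)" ]; then
    echo "Version Information: $(git --version)"
else
    echo "Cannot find git"
    exit 1
fi
```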
[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #6447: ROCm: use GcnArch for mcpu and ApiVersion to select code object version
junrushao1994 edited a comment on pull request #6447: URL: https://github.com/apache/incubator-tvm/pull/6447#issuecomment-690711679 I see. So there are actually two versions: one is ApiVersion, which is used to decide whether to add "-code-object-v3" into mattr; the other is GcnArch, which is used to decide "mcpu". Thank you for the clarification!
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6447: ROCm: use GcnArch for mcpu and ApiVersion to select code object version
junrushao1994 commented on pull request #6447: URL: https://github.com/apache/incubator-tvm/pull/6447#issuecomment-690711679 I see. So there are actually two versions: one is ApiVersion, which is used to decide whether to add "-code-object-v3" into mattr; the other is GcnArch, which is used to decide "mcpu". Thank you for the clarification!
[GitHub] [incubator-tvm] t-vi opened a new pull request #6447: ROCm: use GcnArch for mcpu and ApiVersion to select code object version
t-vi opened a new pull request #6447: URL: https://github.com/apache/incubator-tvm/pull/6447 This is a ROCm followup for #6347 bringing the code that was moved from src/target/llvm/codegen_amdgpu.cc to src/target/target_kind.cc closer to the old mechanism for compute arch autodetection. @junrushao1994 @masahi
[GitHub] [incubator-tvm] masahi opened a new pull request #6446: [ONNX] Add support for GatherElements conversion
masahi opened a new pull request #6446: URL: https://github.com/apache/incubator-tvm/pull/6446 https://github.com/onnx/onnx/blob/master/docs/Operators.md#GatherElements This is required to convert decision trees from hummingbird to Relay. please review @siju-samuel @jwfromm @mbrookhart
[GitHub] [incubator-tvm] t-vi commented on pull request #6347: [Target][Codegen] Use target class in all codegens
t-vi commented on pull request #6347: URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-690705823 I'll just send a PR in a minute or so.
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6347: [Target][Codegen] Use target class in all codegens
junrushao1994 commented on pull request #6347: URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-690703438 @t-vi I see. We should change this line: https://github.com/apache/incubator-tvm/blob/master/src/target/target_kind.cc#L176, from `runtime::kApiVersion` to `runtime::kGcnArch`. Is that correct?
[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #6347: [Target][Codegen] Use target class in all codegens
junrushao1994 edited a comment on pull request #6347: URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-690703438 @t-vi I see. We should change this line: https://github.com/apache/incubator-tvm/blob/master/src/target/target_kind.cc#L176, from `runtime::kApiVersion` to `runtime::kGcnArch`. Is that correct?
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6347: [Target][Codegen] Use target class in all codegens
junrushao1994 commented on pull request #6347: URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-690699114 @t-vi Yes, the detection logic used previously in the amdgpu codegen is here: https://github.com/apache/incubator-tvm/blob/e5b793f39fd5b4f84b0aedf06aa376ebe45cf2bc/src/target/llvm/codegen_amdgpu.cc#L194. Then I moved the logic to the target constructor to reveal it at the earliest stage.
[GitHub] [incubator-tvm] t-vi commented on pull request #6347: [Target][Codegen] Use target class in all codegens
t-vi commented on pull request #6347: URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-690694805 The ROCm default detection seems to have been mangled to confuse ROCm version (software) with compute arch (hardware, e.g. gfx). I'll try to fix it.
[GitHub] [incubator-tvm] jroesch commented on pull request #6437: [RFC][Formatting] Add scripts for applying Black to the Python code.
jroesch commented on pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#issuecomment-690676307 @junrushao1994 @comaniac @areusch I just added the scripts and cleaned some things up, take another pass if you can
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
anijain2305 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690618685 @tqchen Do you have any suggestions here?
[GitHub] [incubator-tvm] lhutton1 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
lhutton1 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690617359 Yep that's correct. I think the current PR has one advantage, being that a "custom" layout is explicitly defined and passed to convert layout, as opposed to overriding a function that may seem unrelated to the convert layout pass at first glance. However, I suppose there is also a question as to whether we want to duplicate functionality that can already be made use of. Is there a reason the components of the operator registry were never exposed to C++?
[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
jroesch commented on a change in pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#discussion_r486554772 ## File path: pyproject.toml ## @@ -0,0 +1,29 @@ +[tool.black] +line-length = 88 Review comment: @tqchen confirmed this should be 100.
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
anijain2305 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690600943 Yes, that makes sense. Thanks for explaining! Yes, if we want to go through something like `TempOpAttr`, we need that functionality in C++. Is your current PR implementation easier or more scalable then? I think in your current PR, you can define the TVMPackedFunc counterpart in the `PreProcessModule` C++ itself?
[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
jroesch commented on a change in pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#discussion_r486547524 ## File path: tests/lint/git-black.sh ## @@ -0,0 +1,76 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +set -e +set -u +set -o pipefail + +if [[ "$1" == "-i" ]]; then +INPLACE_FORMAT=1 +shift 1 +else +INPLACE_FORMAT=0 +fi + +if [[ "$#" -lt 1 ]]; then +echo "Usage: tests/lint/git-black.sh [-i] " +echo "" +echo "Run black-format on files that changed since " +echo "Examples:" +echo "- Compare last one commit: tests/lint/git-black.sh HEAD~1" +echo "- Compare against upstream/master: tests/lint/git-black.sh upstream/master" +echo "You can also add -i option to do inplace format" +exit 1 +fi + +cleanup() +{ + rm -rf /tmp/$$.black-format.txt +} +trap cleanup 0 + + +if [ -x "$(command -v black)" ]; then +BLACK=black +else +echo "Cannot find black" +exit 1 +fi + +# Print out specific version +${BLACK} --version + +FILES=$(git diff --name-only HEAD $1 -- "*.py" "*.pyi" | tr '\n' ' ') Review comment: Honestly, I hate bash and struggle through as much as possible; will defer to you.
[GitHub] [incubator-tvm] jroesch commented on pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
jroesch commented on pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#issuecomment-690594159 @areusch @tqchen @comaniac I can roll back the formatting; the first 3 or 4 commits were focused on formatting, then I went through the process to see if it would actually work.
[GitHub] [incubator-tvm] comaniac commented on pull request #6302: [tvmc] command line driver 'compile' (part 2/4)
comaniac commented on pull request #6302: URL: https://github.com/apache/incubator-tvm/pull/6302#issuecomment-690587908 cc @masahi @junrushao1994 @tqchen for a final review.
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)
comaniac commented on a change in pull request #6302: URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r486517802 ## File path: python/tvm/driver/tvmc/compiler.py ## @@ -0,0 +1,280 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +""" +Provides support to compile networks both AOT and JIT. +""" +import logging +import os.path +import tarfile +from pathlib import Path + +import tvm +from tvm import autotvm +from tvm import relay +from tvm.contrib import cc +from tvm.contrib import util + +from . import common, frontends +from .main import register_parser + + +@register_parser +def add_compile_parser(subparsers): +""" Include parser for 'compile' subcommand """ + +parser = subparsers.add_parser("compile", help="compile a model") +parser.set_defaults(func=drive_compile) +parser.add_argument( +"--cross-compiler", +default="", +help="the cross compiler to generate target libraries, e.g. 'aarch64-linux-gnu-gcc'", +) +parser.add_argument( +"--desired-layout", +choices=["NCHW", "NHWC"], +default=None, +help="change the data layout of the whole graph", +) +parser.add_argument( +"--dump-code", +metavar="FORMAT", +default="", +help="comma separarated list of formats to export, e.g. 
'asm,ll,relay' " ) +parser.add_argument( +"--model-format", +choices=frontends.get_frontend_names(), +help="specify input model format", +) +parser.add_argument( +"-o", +"--output", +default="module.tar", +help="output the compiled module to an archive", +) +parser.add_argument( +"--target", +help="compilation target as plain string, inline JSON or path to a JSON file", +required=True +) +parser.add_argument( +"--tuning-records", +metavar="PATH", +default="", +help="path to an auto-tuning log file from AutoTVM" Review comment: ```suggestion help="path to an auto-tuning log file by AutoTVM. If not presented, the fallback/tophub configs will be used" ```
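The `@register_parser` pattern in the tvmc snippet above can be sketched as a small decorator that collects subcommand installers. The following is an illustrative, self-contained approximation; the registry list and the trivial `func` handler are hypothetical stand-ins, not tvmc's actual module layout:

```python
import argparse

# Hypothetical registry of subcommand installers (tvmc keeps something
# similar in its `main` module; names here are illustrative).
REGISTERED_PARSERS = []


def register_parser(make_subparser):
    """Decorator: collect functions that each install one subcommand."""
    REGISTERED_PARSERS.append(make_subparser)
    return make_subparser


@register_parser
def add_compile_parser(subparsers):
    parser = subparsers.add_parser("compile", help="compile a model")
    parser.add_argument("-o", "--output", default="module.tar")
    # A toy handler standing in for tvmc's drive_compile.
    parser.set_defaults(func=lambda args: "compiling to " + args.output)


def build_cli():
    """Assemble the top-level parser from every registered subcommand."""
    parser = argparse.ArgumentParser(prog="tvmc")
    subparsers = parser.add_subparsers()
    for installer in REGISTERED_PARSERS:
        installer(subparsers)
    return parser


args = build_cli().parse_args(["compile", "-o", "out.tar"])
assert args.func(args) == "compiling to out.tar"
```

The decorator keeps each subcommand's flags next to its implementation, so adding a new driver command only requires defining one decorated function.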
[GitHub] [incubator-tvm] lhutton1 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
lhutton1 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690584984 So since convert layout is part of preprocessing the module in the optimize step before codegen (https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/arm_compute_lib/codegen.cc#L342) we could do this: ```
with TempOpAttr("nn.conv2d", "FTVMConvertOpLayout", ...):
    graph, lib, params = tvm.build(...)
``` However, it's not good to expect the user to know how to add this. Therefore, my plan was to set the attribute for the scope of `PreProcessModule` https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/arm_compute_lib/codegen.cc#L340. Hope that makes more sense?
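The scoped-override idea being discussed can be illustrated with a minimal, self-contained sketch of a `TempOpAttr`-style context manager. A plain dict stands in for TVM's operator attribute registry, and all names are illustrative rather than TVM's actual API:

```python
class TempOpAttr:
    """Temporarily override an operator attribute, restoring it on exit."""

    def __init__(self, registry, op_name, attr_key, attr_value):
        self.registry = registry
        self.key = (op_name, attr_key)
        self.new_value = attr_value

    def __enter__(self):
        # Save whatever was registered before (None if nothing was).
        self.old_value = self.registry.get(self.key)
        self.registry[self.key] = self.new_value
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Restore the previous registration, or remove ours entirely.
        if self.old_value is None:
            del self.registry[self.key]
        else:
            self.registry[self.key] = self.old_value


registry = {("nn.conv2d", "FTVMConvertOpLayout"): "default_layout_fn"}
with TempOpAttr(registry, "nn.conv2d", "FTVMConvertOpLayout", "acl_layout_fn"):
    # Inside the scope, anything consulting the registry sees the override.
    assert registry[("nn.conv2d", "FTVMConvertOpLayout")] == "acl_layout_fn"
# Outside the scope, the original attribute is back.
assert registry[("nn.conv2d", "FTVMConvertOpLayout")] == "default_layout_fn"
```

The restore step in `__exit__` is what makes the override safe to apply inside a codegen pass without leaking into later compilations.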
[GitHub] [incubator-tvm] areusch commented on a change in pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
areusch commented on a change in pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#discussion_r486529234 ## File path: tests/lint/git-black.sh ## +FILES=$(git diff --name-only HEAD $1 -- "*.py" "*.pyi" | tr '\n' ' ') Review comment: can we use array style here?
`IFS=$'\n' read -a FILES -d'\n' < <(git diff --name-only HEAD $1 -- "*.py" "*.pyi")` `CMD=( "black" "--check" "${FILES[@]}" )` `"${CMD[@]}"`
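A short demonstration of why the array form matters (a sketch, not part of the PR): word-splitting a space-joined string breaks filenames that contain spaces, while a bash array keeps each filename as one word. `mapfile` is used here as an equivalent alternative to the `IFS=... read` form suggested above; `list_changed` is a stand-in for `git diff --name-only`.

```shell
#!/bin/bash
set -euo pipefail

# Simulate `git diff --name-only` output, including a filename with spaces.
list_changed() { printf '%s\n' "a.py" "dir with space/b.py"; }

# Array form: each output line becomes exactly one array element.
mapfile -t FILES < <(list_changed)
echo "array count: ${#FILES[@]}"

# Expanding the array preserves each filename as a single argument.
CMD=( "echo" "${FILES[@]}" )
"${CMD[@]}"

# String form: the second filename is split into three separate words.
STR=$(list_changed | tr '\n' ' ')
# shellcheck disable=SC2086
set -- $STR
echo "string count: $#"
```

Running the sketch shows the array holding 2 elements while the word-split string yields 4, which is exactly the failure mode the review is guarding against.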
[GitHub] [incubator-tvm] anijain2305 merged pull request #6420: [QNN][Relay] Fixed bug in quantized conv2d.
anijain2305 merged pull request #6420: URL: https://github.com/apache/incubator-tvm/pull/6420
[incubator-tvm] branch master updated (b81bdee -> aeef16d)
This is an automated email from the ASF dual-hosted git repository. anijain2305 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from b81bdee [Relay] Add Defunctionalization Pass (#6400) add aeef16d [QNN][Relay] Fixed bug in quantized conv2d. (#6420) No new revisions were added by this update. Summary of changes: src/relay/qnn/op/convolution.cc | 23 +-- tests/python/relay/test_op_qnn_conv2d.py | 28 2 files changed, 49 insertions(+), 2 deletions(-)
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6420: [QNN][Relay] Fixed bug in quantized conv2d.
anijain2305 commented on pull request #6420: URL: https://github.com/apache/incubator-tvm/pull/6420#issuecomment-690543418 Thanks @jainris! This is merged!
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
comaniac commented on a change in pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#discussion_r486506436 ## File path: tests/lint/git-black.sh ## @@ -0,0 +1,76 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +set -e +set -u +set -o pipefail + +if [[ "$1" == "-i" ]]; then +INPLACE_FORMAT=1 +shift 1 +else +INPLACE_FORMAT=0 +fi + +if [[ "$#" -lt 1 ]]; then +echo "Usage: tests/lint/git-black.sh [-i] " +echo "" +echo "Run black-format on files that changed since " Review comment: ```suggestion echo "Run black-format on Python files that changed since " ``` ## File path: tests/lint/git-black.sh ## +echo "Running git-black against" $1 Review comment: ```suggestion echo "Running git-black on Python files against" $1 ``` ## File path: pyproject.toml ## @@ -0,0 +1,29 @@ +[tool.black] +line-length = 88 Review comment: So we are using 88 as the TVM standard column length for Python code? I know this is the default and the suggested value in black, so I just want to double-confirm.
[GitHub] [incubator-tvm] t-vi closed pull request #6440: [ROCm] include mcpu and mtriple as target options
t-vi closed pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440
[GitHub] [incubator-tvm] t-vi commented on pull request #6440: [ROCm] include mcpu and mtriple as target options
t-vi commented on pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440#issuecomment-690536150 @junrushao1994 Right, thank you! @masahi @tqchen sorry for the noise. I had searched for rocm/mcpu patches but missed the more general one.
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6443: [RELAY][OP] roi_align operator alter layout
anijain2305 commented on pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443#issuecomment-690523605 Is there an `NHWC` topi implementation for `roi_align`? If not, this will cause compilation failures for traditional TVM targets.
[incubator-tvm] branch master updated (e6374dc -> b81bdee)
This is an automated email from the ASF dual-hosted git repository. marisa pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from e6374dc Fix broadcast shape (#6422) add b81bdee [Relay] Add Defunctionalization Pass (#6400) No new revisions were added by this update. Summary of changes: python/tvm/relay/transform/transform.py| 26 ++ src/relay/transforms/defunctionalization.cc| 431 + .../python/relay/test_pass_defunctionalization.py | 226 +++ 3 files changed, 683 insertions(+) create mode 100644 src/relay/transforms/defunctionalization.cc create mode 100644 tests/python/relay/test_pass_defunctionalization.py
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6437: [RFC][Formatting] Apply black to entire Python code base.
junrushao1994 commented on pull request #6437: URL: https://github.com/apache/incubator-tvm/pull/6437#issuecomment-690520497 It is a bit hard to review 1000 files... maybe just take a look at the pyproject.toml file and assume other parts are correct?
[GitHub] [incubator-tvm] MarisaKirisame merged pull request #6400: [Relay] Add Defunctionalization Pass
MarisaKirisame merged pull request #6400: URL: https://github.com/apache/incubator-tvm/pull/6400
[GitHub] [incubator-tvm] MarisaKirisame commented on pull request #6400: [Relay] Add Defunctionalization Pass
MarisaKirisame commented on pull request #6400: URL: https://github.com/apache/incubator-tvm/pull/6400#issuecomment-690407237 Thx @hypercubestart @yzhliu .
[GitHub] [incubator-tvm] giuseros opened a new pull request #6445: Add dot product support for quantized convolution.
giuseros opened a new pull request #6445: URL: https://github.com/apache/incubator-tvm/pull/6445
### High level description of the submission
We added two new intrinsics in `topi/arm_cpu/tensor_intrin.py`, namely:
- `mmla4x4`: computes a matrix multiplication between tile `A(4,4)` and tile `B(4,4)`
- `mmla16x4`: computes a matrix multiplication between tile `A(rows,4)` and tile `B(4,16)`
Then we used those intrinsics in two separate strategies. We added the strategies in `topi/arm_cpu/conv2d_int8.py` and implemented the schedules in `topi/arm_cpu/conv2d_gemm.py`. In particular:
- `schedule_conv2d_gemm`, when accelerated, packs matrix `A`, computes GEMM, and unpacks the resulting matrix. This uses the `mmla4x4` intrinsic
- `schedule_conv2d_gemm_hybrid` doesn't do any packing on `A` and `C`, which are in native form. This uses the `mmla16x4` intrinsic
Please note that, due to the limitations of `tensorize`, we need to pad matrix `A` in both cases (when dimensions are not multiples of the tiling shape)
### RFC
This PR is based on the following RFC: https://discuss.tvm.apache.org/t/rfc-accelerate-quantized-convolution-through-dot-product/7873 Change-Id: Id0d818d84ffc458c6dad7983fd350a0f3d5db395
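The padding note above can be illustrated with a small numpy sketch (illustrative only, not the TVM schedule code): each dimension of `A` is zero-padded up to the next multiple of the tile size before the tensorized GEMM runs, and the valid region of the product is unchanged.

```python
import numpy as np


def pad_to_tile(a, tile_rows=4, tile_cols=4):
    """Zero-pad a matrix so each dimension is a multiple of the tile shape."""
    rows, cols = a.shape
    pad_r = (-rows) % tile_rows  # extra rows to reach the next multiple
    pad_c = (-cols) % tile_cols  # extra cols to reach the next multiple
    return np.pad(a, ((0, pad_r), (0, pad_c)))


A = np.arange(30, dtype="int8").reshape(6, 5)  # neither dim is a multiple of 4
B = np.ones((5, 4), dtype="int8")

# Padding A's columns requires padding B's rows so the product stays defined;
# the zero padding contributes nothing to the valid part of the result.
A_p = pad_to_tile(A)               # shape becomes (8, 8)
B_p = np.pad(B, ((0, 3), (0, 0)))  # shape becomes (8, 4)
C_p = A_p.astype("int32") @ B_p.astype("int32")

assert A_p.shape == (8, 8)
assert np.array_equal(C_p[:6], A.astype("int32") @ B.astype("int32"))
```

The accumulation is done in int32, mirroring the usual widening of int8 dot products to avoid overflow.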
[GitHub] [incubator-tvm] anijain2305 edited a comment on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
anijain2305 edited a comment on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690504454 I think calling ConvertLayout in C++ should be ok. The thing that I don't fully understand is why we need to set `FTVMConvertOpLayout` in C++. Is there a way to do it in python? If there is a high-level python API call, we can wrap that with `with TempOpAttr`?
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
anijain2305 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690504454 I think calling ConvertLayout in C++ should be ok. The thing that I don't fully understand is why we need to set `FTVMConvertOpLayout` in C++. Is there a way to do it in python? If there is a high-level python API call, we can wrap that with `with TempOpAttr`, and that should overwrite the registry in the `with` scope.
[GitHub] [incubator-tvm] lhutton1 commented on pull request #6430: [ConvertLayout] Use a packed function to decide layout based on operator attributes
lhutton1 commented on pull request #6430: URL: https://github.com/apache/incubator-tvm/pull/6430#issuecomment-690476905 Thanks for the pointer @anijain2305. I do agree this is a better approach. I have an example working on my end for my use-case, although it seems quite messy. Setting temporary attributes, i.e. `FTVMConvertOpLayout`, on the C++ side of things is more difficult than from python. The reason I need to do this in C++ is because I run the convert layout pass in Arm Compute Library codegen. I've had a go at implementing something like the `TempOpAttr` class from python in C++. This is to ensure I'm only setting this config for the Arm Compute Library codegen. However, this involves fetching a series of packed functions (namely OpGetAttr, OpResetAttr and OpSetAttr) from `src/ir/op.cc`, which doesn't sound right. I'm just wondering if it sounds like I'm completely off piste with this, or whether you know of anything that could help that I've missed?
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6440: [ROCm] include mcpu and mtriple as target options
junrushao1994 commented on pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440#issuecomment-690464393 Hmmm I think the issue has been fixed by #6369
[GitHub] [incubator-tvm] tqchen commented on pull request #6440: [ROCm] include mcpu and mtriple as target options
tqchen commented on pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440#issuecomment-690456262 @t-vi please rebase against the master to resolve conflicts
[GitHub] [incubator-tvm] tqchen commented on pull request #6396: [Relay][Op] Fix Reshape Compute
tqchen commented on pull request #6396: URL: https://github.com/apache/incubator-tvm/pull/6396#issuecomment-690413648 @kevinthesun please rebase against the master
[GitHub] [incubator-tvm] zhiics merged pull request #6422: Fix broadcast shape
zhiics merged pull request #6422: URL: https://github.com/apache/incubator-tvm/pull/6422
[GitHub] [incubator-tvm] zhiics commented on pull request #6422: Fix broadcast shape
zhiics commented on pull request #6422: URL: https://github.com/apache/incubator-tvm/pull/6422#issuecomment-690386154 Thanks @kevinthesun @jroesch @mbrookhart @electriclilies @icemelon9
[incubator-tvm] branch master updated (6b6661e -> e6374dc)
This is an automated email from the ASF dual-hosted git repository. zhic pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 6b6661e [Target] Tags, Composite Target, Unified Interface (#6369) add e6374dc Fix broadcast shape (#6422) No new revisions were added by this update. Summary of changes: include/tvm/topi/detail/broadcast.h | 10 +- tests/python/relay/test_any.py | 28 2 files changed, 33 insertions(+), 5 deletions(-)
[GitHub] [incubator-tvm] leandron edited a comment on pull request #6302: [tvmc] command line driver 'compile' (part 2/4)
leandron edited a comment on pull request #6302: URL: https://github.com/apache/incubator-tvm/pull/6302#issuecomment-690379907 There was an issue we discovered with `ConvertLayout` when running the new tests introduced here; once #6442 is merged, all the tests should pass. @comaniac would you mind having another look into this PR?
[GitHub] [incubator-tvm] leandron commented on pull request #6302: [tvmc] command line driver 'compile' (part 2/4)
leandron commented on pull request #6302: URL: https://github.com/apache/incubator-tvm/pull/6302#issuecomment-690379907 There was an issue we discovered with ConvertLayout when running the new tests introduced here; once #6442 is merged, all the tests should pass. @comaniac would you mind having another look into this PR?
[GitHub] [incubator-tvm] tqchen commented on pull request #6369: [Target] Target Tags, Composite Target and Unified Interface
tqchen commented on pull request #6369: URL: https://github.com/apache/incubator-tvm/pull/6369#issuecomment-690375070 Thanks @junrushao1994 @comaniac
[GitHub] [incubator-tvm] tqchen merged pull request #6369: [Target] Target Tags, Composite Target and Unified Interface
tqchen merged pull request #6369: URL: https://github.com/apache/incubator-tvm/pull/6369
[incubator-tvm] branch master updated (b05aa96 -> 6b6661e)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from b05aa96 [Rust] Improve the error reporting in build.rs files by using anyhow. (#6401) add 6b6661e [Target] Tags, Composite Target, Unified Interface (#6369) No new revisions were added by this update. Summary of changes: apps/benchmark/gpu_imagenet_bench.py | 2 +- apps/topi_recipe/conv/test_conv_int8_arm.py| 2 +- apps/topi_recipe/conv/test_conv_int8_intel.py | 2 +- apps/topi_recipe/gemm/gemm_int8.py | 2 +- include/tvm/target/tag.h | 155 include/tvm/target/target.h| 110 +-- include/tvm/target/target_kind.h | 94 ++- python/tvm/autotvm/feature.py | 13 +- python/tvm/autotvm/graph_tuner/base_graph_tuner.py | 68 +- python/tvm/autotvm/measure/measure_methods.py | 32 +- python/tvm/autotvm/record.py | 33 +- python/tvm/autotvm/task/dispatcher.py | 8 +- python/tvm/autotvm/task/task.py| 50 +- python/tvm/autotvm/task/topi_integration.py| 9 +- python/tvm/autotvm/tophub.py | 22 +- python/tvm/contrib/peak.py | 22 +- python/tvm/driver/build_module.py | 14 +- python/tvm/relay/backend/compile_engine.py | 11 +- python/tvm/relay/backend/graph_runtime_codegen.py | 11 +- python/tvm/relay/backend/vm.py | 12 +- python/tvm/relay/build_module.py | 32 +- python/tvm/relay/testing/py_converter.py | 4 +- python/tvm/target/__init__.py | 3 +- python/tvm/target/codegen.py | 4 +- python/tvm/target/intrin.py| 5 +- python/tvm/target/tag.py | 78 ++ python/tvm/target/target.py| 149 ++-- python/tvm/te/hybrid/calls.py | 6 +- python/tvm/te/hybrid/runtime.py| 67 +- python/tvm/topi/cuda/conv2d_hwnc_tensorcore.py | 8 +- python/tvm/topi/cuda/softmax.py| 5 +- python/tvm/topi/generic/__init__.py| 2 +- python/tvm/topi/testing/common.py | 2 +- rust/tvm/examples/resnet/src/build_resnet.py | 2 +- src/auto_scheduler/measure_record.cc | 2 +- src/driver/driver_api.cc | 8 +- src/relay/backend/build_module.cc | 8 +- 
src/relay/backend/compile_engine.cc| 2 +- src/relay/backend/graph_runtime_codegen.cc | 2 +- src/relay/backend/interpreter.cc | 2 +- src/relay/backend/vm/compiler.cc | 8 +- src/relay/transforms/fold_constant.cc | 2 +- src/relay/transforms/partial_eval.cc | 2 +- src/target/build_common.h | 16 - src/target/llvm/codegen_amdgpu.cc | 64 +- src/target/llvm/codegen_blob.cc| 2 +- src/target/llvm/codegen_nvptx.cc | 34 +- src/target/llvm/llvm_module.cc | 10 +- src/target/tag.cc | 77 ++ src/target/target.cc | 784 +++-- src/target/target_kind.cc | 335 + src/topi/schedule.cc | 2 +- tests/cpp/build_module_test.cc | 8 +- tests/cpp/relay_build_module_test.cc | 2 +- tests/cpp/relay_transform_sequential_test.cc | 2 +- tests/cpp/target_test.cc | 19 +- tests/cpp/utvm_runtime_standalone_test.cc | 2 +- tests/python/contrib/test_ethosn/infrastructure.py | 4 +- tests/python/integration/test_ewise.py | 4 +- tests/python/integration/test_gemm.py | 2 +- tests/python/integration/test_winograd_nnpack.py | 2 +- tests/python/relay/test_backend_compile_engine.py | 4 +- tests/python/relay/test_backend_interpreter.py | 2 +- tests/python/relay/test_pass_alter_op_layout.py| 6 +- tests/python/relay/test_pass_auto_quantize.py | 2 +- tests/python/relay/test_pass_fold_constant.py | 2 +- tests/python/relay/test_pass_manager.py| 2 +- tests/python/relay/test_pass_qnn_legalize.py | 36 +- tests/python/topi/python/test_fifo_buffer.py | 4 +- tests/python/topi/python/test_topi_batch_matmul.py | 2 +- .../topi/python/test_topi_bitserial_conv2d.py | 4 +- .../topi/python/test_topi_bitserial_conv2d_rasp.py | 2 +- tests/python/topi/python/test_topi_bnn.py | 2 +- tests/python/topi/python/test_topi_broadcast.py| 10 +-
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6369: [Target] Target Tags, Composite Target and Unified Interface
junrushao1994 commented on pull request #6369: URL: https://github.com/apache/incubator-tvm/pull/6369#issuecomment-690365841 @tqchen CI is green. Would you like to take another look? Thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen commented on pull request #6401: [Rust] Improve the error reporting in build.rs files by using anyhow.
tqchen commented on pull request #6401: URL: https://github.com/apache/incubator-tvm/pull/6401#issuecomment-690364824 Thanks @imalsogreg @adelbertc @jroesch!
[GitHub] [incubator-tvm] tqchen merged pull request #6401: [Rust] Improve the error reporting in build.rs files by using anyhow.
tqchen merged pull request #6401: URL: https://github.com/apache/incubator-tvm/pull/6401
[incubator-tvm] branch master updated (3a4e61a -> b05aa96)
tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

from 3a4e61a [METAL] set MTLBuffer purgeable state (#6376) (#6438)
add b05aa96 [Rust] Improve the error reporting in build.rs files by using anyhow. (#6401)

No new revisions were added by this update.

Summary of changes:
apps/sgx/Cargo.toml | 3 ++
apps/wasm-standalone/wasm-graph/build.rs | 1 -
rust/tvm-graph-rt/tests/test_nn/Cargo.toml | 1 +
rust/tvm-graph-rt/tests/test_nn/build.rs | 22 -
rust/tvm-graph-rt/tests/test_tvm_basic/Cargo.toml | 1 +
rust/tvm-graph-rt/tests/test_tvm_basic/build.rs | 27 +--
rust/tvm-graph-rt/tests/test_tvm_dso/Cargo.toml | 4 ++
rust/tvm-graph-rt/tests/test_tvm_dso/build.rs | 20 +
rust/tvm-sys/Cargo.toml | 1 +
rust/tvm-sys/build.rs | 55 ---
rust/tvm/examples/resnet/Cargo.toml | 3 ++
rust/tvm/examples/resnet/build.rs | 9 +++-
rust/tvm/tests/basics/Cargo.toml | 3 ++
rust/tvm/tests/basics/build.rs | 10 +++--
14 files changed, 105 insertions(+), 55 deletions(-)
[GitHub] [incubator-tvm] tqchen commented on issue #6441: ONNX strided slice ignoring stride argument
tqchen commented on issue #6441: URL: https://github.com/apache/incubator-tvm/issues/6441#issuecomment-690360234 cc @jwfromm @masahi
[GitHub] [incubator-tvm] tqchen commented on issue #6376: [BUG] Memory leak in Metal runtime device api
tqchen commented on issue #6376: URL: https://github.com/apache/incubator-tvm/issues/6376#issuecomment-690358667 Thanks @jacobpostman @vathysjacob! It would also be nice if you could experiment further to see what happens to the rest of the memory.
[incubator-tvm] branch master updated (8705cea -> 3a4e61a)
tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

from 8705cea [Relay, Torch] Fix stack op axis check, support torch::stack conversion for a static list (#6433)
add 3a4e61a [METAL] set MTLBuffer purgeable state (#6376) (#6438)

No new revisions were added by this update.

Summary of changes:
src/runtime/metal/metal_device_api.mm | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
[GitHub] [incubator-tvm] tqchen merged pull request #6438: [METAL] set MTLBuffer purgeable state (#6376)
tqchen merged pull request #6438: URL: https://github.com/apache/incubator-tvm/pull/6438
[GitHub] [incubator-tvm] t-vi commented on pull request #6444: CUDA: broaden path detection
t-vi commented on pull request #6444: URL: https://github.com/apache/incubator-tvm/pull/6444#issuecomment-690298687 @tqchen @junrushao1994 @vinx13 I think the last PRs on this file were reviewed by you.
[GitHub] [incubator-tvm] t-vi commented on pull request #6444: CUDA: broaden path detection
t-vi commented on pull request #6444: URL: https://github.com/apache/incubator-tvm/pull/6444#issuecomment-690255750 At some point we might ask which supported CUDA versions are in the else case (looks like this is only CUDA < 9 to me).
[GitHub] [incubator-tvm] t-vi opened a new pull request #6444: CUDA: broaden path detection
t-vi opened a new pull request #6444: URL: https://github.com/apache/incubator-tvm/pull/6444 Debian/Ubuntu repackaged CUDA has slightly different paths. Also, add CUDA versions 10.1 and 10.2.
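To illustrate the kind of change the PR describes, a hedged sketch of broadened CUDA path detection (the helper name, probed paths, and version list here are assumptions, not the actual TVM code): the stock `/usr/local/cuda*` prefixes are probed alongside `/usr/lib/cuda`, where Debian/Ubuntu's repackaged toolkit lives, and the newer 10.1/10.2 versions are included.

```python
import os
import tempfile

# Hypothetical sketch, not the actual TVM detection code: probe candidate
# CUDA install prefixes in order and return the first that exists.
def find_cuda_home(extra_candidates=(),
                   versions=("10.2", "10.1", "10.0", "9.2", "9.1", "9.0")):
    candidates = list(extra_candidates)
    candidates += ["/usr/local/cuda-%s" % v for v in versions]
    # Unversioned locations: the usual symlink plus Debian/Ubuntu's path.
    candidates += ["/usr/local/cuda", "/usr/lib/cuda"]
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None

# A fake install prefix passed via extra_candidates is found first.
fake = tempfile.mkdtemp()
assert find_cuda_home(extra_candidates=(fake,)) == fake
```

The design point is simply that detection should enumerate all packaging conventions rather than assume the upstream installer layout.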
[GitHub] [incubator-tvm] Beya2019 opened a new pull request #6443: [RELAY][OP] roi_align operator alter layout
Beya2019 opened a new pull request #6443: URL: https://github.com/apache/incubator-tvm/pull/6443 RFC: #4335 https://discuss.tvm.ai/t/layout-conversion-pass/4009 This adds a convert_op_layout for the roi_align operator (used in Mask R-CNN) and a related test case in test_pass_convert_op_layout.py. Would you please have a look at this @yzhliu @vinx13 @anijain2305 @tqchen
[GitHub] [incubator-tvm] Beya2019 closed pull request #6439: [RELAY][OP] roi_align operator alter layout
Beya2019 closed pull request #6439: URL: https://github.com/apache/incubator-tvm/pull/6439
[GitHub] [incubator-tvm] lhutton1 opened a new pull request #6442: [BUG][ConvertLayout] Fix qnn.conv2d layout conversion too many values to unpack
lhutton1 opened a new pull request #6442: URL: https://github.com/apache/incubator-tvm/pull/6442 This patch follows a previous bugfix in #6419. I made a very simple oversight for qnn.conv2d in that tinfos also contains the qnn parameters; therefore, we need to extract data_info and weight_info differently. cc @leandron @anijain2305 @JoeyTPChou
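Illustratively: for a plain conv2d the layout-conversion callback's `tinfos` holds just the data and weight type infos, while for qnn.conv2d it also carries the quantization-parameter tensors, so a blind two-way tuple unpack raises "too many values to unpack". The sketch below is hypothetical (the function name and entry order are assumptions, not the TVM code), but shows the shape of the fix:

```python
# Hypothetical sketch of the fix described above: take the first two
# entries by index instead of tuple-unpacking the whole list, so extra
# qnn-parameter entries do not break the unpack.
def extract_infos(tinfos):
    if len(tinfos) < 2:
        raise ValueError("expected at least data and weight type infos")
    data_info, weight_info = tinfos[0], tinfos[1]
    qnn_param_infos = tinfos[2:]  # empty for nn.conv2d
    return data_info, weight_info, qnn_param_infos

# nn.conv2d: only data and weight
assert extract_infos(["data", "weight"])[2] == []
# qnn.conv2d also carries zero points and scales (illustrative order)
d, w, qnn = extract_infos(["data", "weight", "in_zp", "k_zp", "in_scale", "k_scale"])
assert (d, w) == ("data", "weight") and len(qnn) == 4
```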
[GitHub] [incubator-tvm] masahi commented on pull request #6440: [ROCm] include mcpu and mtriple as target options
masahi commented on pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440#issuecomment-690208719 Nice, just today I did a clean install of rocm 3.7 and wondered why the benchmark is broken. So timely for me :)
[incubator-tvm] branch master updated (fdef79d -> 8705cea)
masahi pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

from fdef79d hot fix (#6434)
add 8705cea [Relay, Torch] Fix stack op axis check, support torch::stack conversion for a static list (#6433)

No new revisions were added by this update.

Summary of changes:
python/tvm/relay/frontend/pytorch.py | 24 ++---
src/relay/op/tensor/transform.cc | 5 +++--
tests/python/frontend/pytorch/test_forward.py | 31 +++
tests/python/frontend/pytorch/test_lstm.py | 1 +
tests/python/relay/test_op_level3.py | 3 ++-
5 files changed, 58 insertions(+), 6 deletions(-)
[GitHub] [incubator-tvm] masahi merged pull request #6433: [Relay, Torch] Fix stack op axis check, support torch::stack conversion for a static list
masahi merged pull request #6433: URL: https://github.com/apache/incubator-tvm/pull/6433
[GitHub] [incubator-tvm] u99127 commented on a change in pull request #6355: [BYOC][ETHOSN] Introduce further operator support
u99127 commented on a change in pull request #6355: URL: https://github.com/apache/incubator-tvm/pull/6355#discussion_r486264884

## File path: tests/python/contrib/test_ethosn/test_networks.py

@@ -0,0 +1,163 @@
```python
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""Ethos-N integration end-to-end network tests"""

import pytest
pytest.importorskip('tflite')
pytest.importorskip('tensorflow')

from tvm import relay
from tvm.relay.op.contrib.ethosn import ethosn_available, Available
from tvm.contrib import download
import tvm.relay.testing.tf as tf_testing
import tflite.Model
from . import infrastructure as tei


def _get_tflite_model(tflite_model_path, inputs_dict, dtype):
    with open(tflite_model_path, 'rb') as f:
        tflite_model_buffer = f.read()

    try:
        tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buffer, 0)
    except AttributeError:
        tflite_model = tflite.Model.GetRootAsModel(tflite_model_buffer, 0)
    shape_dict = {}
    dtype_dict = {}
    for input in inputs_dict:
        input_shape = inputs_dict[input]
        shape_dict[input] = input_shape
        dtype_dict[input] = dtype

    return relay.frontend.from_tflite(
        tflite_model,
        shape_dict=shape_dict,
        dtype_dict=dtype_dict,
    )


def _test_image_network(model_url, model_sub_path, input_dict, compile_hash,
                        output_count, run=True, host_ops=0, npu_partitions=1):
    if not ethosn_available():
        return

    def get_model():
        if model_url[-3:] in ("tgz", "zip"):
            model_path = tf_testing.get_workload_official(
                model_url,
                model_sub_path,
            )
        else:
            model_path = download.download_testdata(
                model_url,
                model_sub_path,
            )
        return _get_tflite_model(model_path, input_dict, 'uint8')

    outputs = []
    inputs = {}
    for input_name in input_dict:
        input_shape = input_dict[input_name]
        inputs[input_name] = tei.get_real_image(input_shape[1], input_shape[2])

    for npu in [False, True]:
        mod, params = get_model()
        graph, lib, params = tei.build(mod, params, npu=npu,
                                       expected_host_ops=host_ops,
                                       npu_partitions=npu_partitions)
        if npu:
            tei.assert_lib_hash(lib, compile_hash)
```

Review comment:

Hi Zhi,

In an ideal world we would run this with hardware in the CI, and known-good runtime output would confirm the code does the right thing. However, in the absence of testing the runtime outputs of an inference, I would be less comfortable without checking against known-good compile-time output. In static compilers we approximate this by checking against known-good assembler output; I view the check against the hashes in a similar vein.

Checking against the JSON gives us confidence that something is offloaded, but there isn't enough confidence that the generated code continues to remain suitable for Ethos-N77. The hashes have been relatively stable and, in my memory, have changed for one of the two reasons below. @mbaret and @Leo-arm can correct me if I've missed something.

1. Changes to the NPUSW library underneath, but that changes only with changes to the docker file and is thus controlled.
2. Changes for adding newer operators, and thus changes to the Ethos-N port of TVM.

There is a theoretical possibility that the hashes change because of fixups to API changes in TVM, but we haven't seen this in the last 3 months IIRC, with pretty regular (more than twice a week) rebasing while working on this activity. @mbaret and @Leo-arm can correct my memory.

If it looks like the hashes are creating friction for developers in the community, maybe we can revisit this.

regards
Ramana
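The compile-time hash check discussed above can be illustrated in miniature (this sketch only shows the principle on an arbitrary string of generated output; the real `tei.assert_lib_hash` infrastructure and what it hashes are Ethos-N specific):

```python
import hashlib

# Minimal illustration of hash-based regression checking: compare a digest
# of the generated output against a known-good value, so any change in
# codegen is flagged even without hardware to run the result on.
def assert_output_hash(generated, expected_sha256):
    digest = hashlib.sha256(generated.encode("utf-8")).hexdigest()
    assert digest == expected_sha256, "generated output changed: %s" % digest

known_good = hashlib.sha256(b"ADD r0, r1, r2").hexdigest()
assert_output_hash("ADD r0, r1, r2", known_good)  # unchanged output passes

# Any change to the generated output trips the check.
changed = False
try:
    assert_output_hash("ADD r0, r1, r3", known_good)
except AssertionError:
    changed = True
assert changed
```

The trade-off is exactly the one raised in the review: the check is strong against silent codegen regressions but brittle against intentional codegen changes, which must update the stored hash.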
[GitHub] [incubator-tvm] j-paulus opened a new issue #6441: ONNX strided slice ignoring stride argument
j-paulus opened a new issue #6441: URL: https://github.com/apache/incubator-tvm/issues/6441

If strided slice is used in a model, the stride argument is ignored and the result is wrong. I encountered the problem when trying to compile an ONNX model created by PyTorch conversion. A similar problem was present in the PyTorch frontend (#6414) and was fixed by #6418. Possibly related: issue #6316.

Code to reproduce the problem:

```
import torch
import tvm
from tvm import relay
import onnx


class TriggerBug(torch.nn.Module):
    def __init__(self):
        super(TriggerBug, self).__init__()

    def forward(self, x):
        return x[..., 0::2] + x[..., 1::2]


x_in = torch.randn(1, 4)
torch_model = TriggerBug()
onnx_name = 'strided_slice.onnx'
example_output = torch_model(x_in)

# convert to ONNX
torch.onnx.export(torch_model, (x_in,), onnx_name, verbose=True,
                  example_outputs=example_output,
                  input_names=['x'], output_names=['y'],
                  opset_version=10, enable_onnx_checker=True)

onnx_model = onnx.load(onnx_name)
target = 'llvm'
shape_dict = {'x': x_in.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor('graph', mod, tvm.cpu(0), target)
```

The traceback:

> mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
> File "/Users/name/opt/anaconda3/envs/tvm/lib/python3.7/site-packages/tvm-0.7.dev1-py3.7-macosx-10.9-x86_64.egg/tvm/relay/frontend/onnx.py", line 2456, in from_onnx
> mod, params = g.from_onnx(graph, opset)
> File "/Users/name/opt/anaconda3/envs/tvm/lib/python3.7/site-packages/tvm-0.7.dev1-py3.7-macosx-10.9-x86_64.egg/tvm/relay/frontend/onnx.py", line 2302, in from_onnx
> return IRModule.from_expr(func), self._params
> File "/Users/name/opt/anaconda3/envs/tvm/lib/python3.7/site-packages/tvm-0.7.dev1-py3.7-macosx-10.9-x86_64.egg/tvm/ir/module.py", line 236, in from_expr
> return _ffi_api.Module_FromExpr(expr, funcs, defs)
> File "/Users/name/opt/anaconda3/envs/tvm/lib/python3.7/site-packages/tvm-0.7.dev1-py3.7-macosx-10.9-x86_64.egg/tvm/_ffi/_ctypes/packed_func.py", line 225, in __call__
> raise get_last_ffi_error()
> tvm._ffi.base.TVMError: Traceback (most recent call last):
> [bt] (8) 9 libtvm.dylib 0x000122684df8 TVMFuncCall + 72
> [bt] (7) 8 libtvm.dylib 0x000121b8e452 std::__1::__function::__func, tvm::Map)>::AssignTypedLambda(tvm::$_9)::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*), std::__1::allocator, tvm::Map)>::AssignTypedLambda(tvm::$_9)::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 610
> [bt] (6) 7 libtvm.dylib 0x000121b7f810 tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map const&, tvm::Map const&) + 1040
> [bt] (5) 6 libtvm.dylib 0x000121b7ca47 tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool) + 183
> [bt] (4) 5 libtvm.dylib 0x000121b7c4ef tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function) + 1103
> [bt] (3) 4 libtvm.dylib 0x0001224dca20 tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&) + 544
> [bt] (2) 3 libtvm.dylib 0x0001224dbbc7 tvm::relay::TypeInferencer::Infer(tvm::RelayExpr) + 119
> [bt] (1) 2 libtvm.dylib 0x000121b6d87c tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool) + 5308
> [bt] (0) 1 libtvm.dylib 0x0001219917bf dmlc::LogMessageFatal::~LogMessageFatal() + 111
> File "/Users/name/code/python/tvm/src/ir/error.cc", line 132
> TVMError:
> Error(s) have occurred. The program has been annotated with them:
>
> In `main`:
> #[version = "0.0.5"]
> fn (%x: Tensor[(1, 4), float32]) {
>   %0 = strided_slice(%x, begin=[0, 0], end=[2147483647, 9223372036854775807], strides=[1]);
>   %1 = strided_slice(%x, begin=[0, 1], end=[2147483647, 9223372036854775807], strides=[1]);
>   add(%0, %1) Incompatible broadcast type TensorType([1, 4], float32) and TensorType([1, 3], float32);
> }

The intermediate ONNX graph is:

> graph(%x : Float(1:4, 4:1, requires_grad=0, device=cpu)):
>   %1 : Tensor = onnx::Constant[value={1}]()
>   %2 : Tensor = onnx::Constant[value={0}]()
>   %3 : Tensor =
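The failure mode matches plain Python slice semantics: with the stride of 2 honored, both slices of a length-4 last axis have length 2; with the stride dropped, they have lengths 4 and 3, which is exactly the (1, 4) vs (1, 3) broadcast error in the report. A torch-free check of those semantics:

```python
# Pure-Python check of the slice semantics the repro relies on.
row = [10.0, 20.0, 30.0, 40.0]

even = row[0::2]  # elements at indices 0, 2
odd = row[1::2]   # elements at indices 1, 3
assert even == [10.0, 30.0] and odd == [20.0, 40.0]
summed = [a + b for a, b in zip(even, odd)]
assert summed == [30.0, 70.0]

# With the stride ignored (treated as 1), the two slices have mismatched
# lengths 4 and 3 -- the shapes in the "Incompatible broadcast type" error.
assert len(row[0::1]) == 4 and len(row[1::1]) == 3
```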
[GitHub] [incubator-tvm] t-vi opened a new pull request #6440: [ROCm] include mcpu and mtriple as target options
t-vi opened a new pull request #6440: URL: https://github.com/apache/incubator-tvm/pull/6440 This fixes the ROCm backend after #6347 to also allow mcpu and mtriple attributes for ROCm target objects. With this change, we can run gpu_imagenet_bench.py again. @junrushao1994 @masahi
[GitHub] [incubator-tvm] Beya2019 opened a new pull request #6439: [RELAY][OP] roi_align operator alter layout
Beya2019 opened a new pull request #6439: URL: https://github.com/apache/incubator-tvm/pull/6439 RFC: #4335 https://discuss.tvm.ai/t/layout-conversion-pass/4009 This adds a convert_op_layout for the roi_align operator (used in Mask R-CNN) and a related test case in test_pass_convert_op_layout.py. Would you please have a look at this @yzhliu @anijain2305 @tqchen
[GitHub] [incubator-tvm] masahi edited a comment on issue #6268: TVMError: Check failed: it != type_definitions.end(): There is no definition of static_tensor_float32_*
masahi edited a comment on issue #6268: URL: https://github.com/apache/incubator-tvm/issues/6268#issuecomment-690044821

oh I've just tried the above script (reproduced below) on torch 1.6, and it seems they fixed it:

```
import torch
import numpy as np

lhs = torch.zeros((), dtype=torch.int64)
# what prim::NumToTensor(5) above converts to in our frontend
rhs = 5 * np.ones([]).astype("int64")

print(torch.result_type(lhs, 5))
print(torch.result_type(lhs, rhs))
```

Output:

```
torch.int64
torch.int64
```

@nolanliou so for both the `torch.result_type` (fixed in 1.6) and integer div (no longer supported) reasons, I suggest updating your code to avoid integer div and upgrading to PyTorch 1.6.
[GitHub] [incubator-tvm] jacobpostman opened a new pull request #6438: [METAL] set MTLBuffer purgeable state (#6376)
jacobpostman opened a new pull request #6438: URL: https://github.com/apache/incubator-tvm/pull/6438 When using manual reference counting, MTLBuffer purgeable state should be set before releasing. cc @tqchen Resolves most of the issue reported in issue (#6376) . Approx ~800KB memory leak still occurs when loading a new or reloading a model. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] masahi edited a comment on issue #6268: TVMError: Check failed: it != type_definitions.end(): There is no definition of static_tensor_float32_*
masahi edited a comment on issue #6268: URL: https://github.com/apache/incubator-tvm/issues/6268#issuecomment-689960903

ok, reproduced on torch 1.4. First, this is the input TorchScript IR:

```
graph(%x : Long(4, 5)):
  %1 : int = prim::Constant[value=1]() # test.py:8:0
  %2 : int = aten::size(%x, %1) # test.py:8:0
  %3 : Long() = prim::NumToTensor(%2)
  %4 : Long(4, 5) = aten::div_(%x, %3) # test.py:8:0
  return (%4)
```

It seems this is due to the unclear behavior of `torch.result_type`, which we use to promote the dtypes of lhs and rhs: https://github.com/apache/incubator-tvm/blob/eee413f9d9f1157b37adf39060dda1991841/python/tvm/relay/frontend/pytorch.py#L130

Even though both lhs and rhs are clearly int64, result_type can return float32:

```
import torch
import numpy as np

lhs = torch.zeros((), dtype=torch.int64)
# what prim::NumToTensor(5) above converts to in our frontend
rhs = 5 * np.ones([]).astype("int64")

print(torch.result_type(lhs, 5))
print(torch.result_type(lhs, rhs))
```

This is the output with torch 1.4 (UPDATE: it seems this is fixed in 1.6, see below):

```
torch.int64
torch.float32
```

Since PyTorch decides that float32 is the right type, an unnecessary cast is introduced, giving the error above.

cc @t-vi @siju-samuel What should we do about it? The easiest solution seems to be just returning a Python integer instead of making a numpy scalar in the `numtotensor` converter below: https://github.com/apache/incubator-tvm/blob/eee413f9d9f1157b37adf39060dda1991841/python/tvm/relay/frontend/pytorch.py#L1101
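The workaround floated in the comment above can be sketched without torch at all (the function names here are hypothetical, not the actual TVM frontend code): have the `numtotensor` conversion return a plain Python int for integer inputs instead of a 0-d numpy scalar, so downstream dtype promotion is not confused by the numpy type.

```python
import numpy as np

def numtotensor_returning_numpy(val):
    # current behavior, sketched: wrap the value as a 0-d numpy scalar
    return np.array(val)

def numtotensor_returning_python_int(val):
    # proposed behavior, sketched: keep integers as Python ints
    if isinstance(val, (int, np.integer)):
        return int(val)
    return np.array(val)

assert isinstance(numtotensor_returning_numpy(5), np.ndarray)
out = numtotensor_returning_python_int(np.int64(5))
assert type(out) is int and out == 5
```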
[GitHub] [incubator-tvm] t-vi edited a comment on issue #6268: TVMError: Check failed: it != type_definitions.end(): There is no definition of static_tensor_float32_*
t-vi edited a comment on issue #6268: URL: https://github.com/apache/incubator-tvm/issues/6268#issuecomment-690039542

> `torch.result_type` is confused with one of the inputs being numpy scalar of type np.int64, and it returns float32 when both lhs and rhs are clearly int64

Oh, indeed, I missed that at first! I tried to cover numpy scalars as well as I can, but I'll have to fix this before calling result_type. At the same time, division is special w.r.t. the result type. For the quantization: I would not hold my breath trying to cope with the representation of quantization as is. I'm looking at quantizing some models, so I might see how they fare in TVM.
[GitHub] [incubator-tvm] t-vi edited a comment on issue #6268: TVMError: Check failed: it != type_definitions.end(): There is no definition of static_tensor_float32_*
t-vi edited a comment on issue #6268: URL: https://github.com/apache/incubator-tvm/issues/6268#issuecomment-690039542

> `torch.result_type` is confused with one of the inputs being numpy scalar of type np.int64, and it returns float32 when both lhs and rhs are clearly int64

No it is not. The quotient of two int64 is very reasonably float32.