[GitHub] [incubator-tvm] fernchen commented on pull request #6206: [Caffe Frontend] introduce caffe frontend for tvm
fernchen commented on pull request #6206: URL: https://github.com/apache/incubator-tvm/pull/6206#issuecomment-669719467

> > > Hi @tqchen @FrozenGene:
> > > I adopted the suggestion that just add caffe env in the ci_cpu, see [#6023 (comment)](https://github.com/apache/incubator-tvm/pull/6023#discussion_r461209556)
> > > But now there are some errors in this pr when doing ci_gpu. I have read the error log, and find that it will try to generate docs by executing this script tutorials/frontend/from_caffe.py in tvmai/ci-gpu:v0.64, and obviously there is no Caffe env in tvmai/ci-gpu:v0.64. So is there any way to avoid this problem, since we only have caffe env in ci_cpu?
> >
> > I think we have to install caffe to gpu docker too.
>
> Aha, what bad news! Shall we add Caffe to the GPU docker? I can propose a new PR.

Need more advice from you @tqchen

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] fernchen commented on pull request #6206: [Caffe Frontend] introduce caffe frontend for tvm
fernchen commented on pull request #6206: URL: https://github.com/apache/incubator-tvm/pull/6206#issuecomment-669718865

> > Hi @tqchen @FrozenGene:
> > I adopted the suggestion that just add caffe env in the ci_cpu, see [#6023 (comment)](https://github.com/apache/incubator-tvm/pull/6023#discussion_r461209556)
> > But now there are some errors in this pr when doing ci_gpu. I have read the error log, and find that it will try to generate docs by executing this script tutorials/frontend/from_caffe.py in tvmai/ci-gpu:v0.64, and obviously there is no Caffe env in tvmai/ci-gpu:v0.64. So is there any way to avoid this problem, since we only have caffe env in ci_cpu?
>
> I think we have to install caffe to gpu docker too.

Aha, what bad news! Shall we add Caffe to the GPU docker? I can propose a new PR.
[GitHub] [incubator-tvm] windclarion opened a new pull request #6221: [TFLite] axis can be a scalar
windclarion opened a new pull request #6221: URL: https://github.com/apache/incubator-tvm/pull/6221 if tensor's size == 1, then "tuple(axis_value)" will raise "TypeError: iteration over a 0-d array"
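The failure mode can be reproduced outside TVM; a minimal sketch (using NumPy directly, not the TFLite frontend itself) of the error and one possible guard:

```python
import numpy as np

# When the axis tensor's size == 1, it arrives as a 0-d array (a scalar).
axis_value = np.array(1)

try:
    tuple(axis_value)  # raises: iteration over a 0-d array
except TypeError as err:
    print("TypeError:", err)

# One possible guard (an illustration, not necessarily the fix in this PR):
# promote the scalar to a 1-d array before converting to a tuple.
axis = tuple(np.atleast_1d(axis_value))
print(axis)
```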
[GitHub] [incubator-tvm] windclarion opened a new pull request #6220: [C++ RPC] fix typo to keep same with source code
windclarion opened a new pull request #6220: URL: https://github.com/apache/incubator-tvm/pull/6220 apps/cpp_rpc/main.cc L78 has `int port_end = 9099`, but the help message and some comments say it is 9199.
[GitHub] [incubator-tvm] csullivan commented on a change in pull request #6069: [TIR][BugFix] Avoid simplifying substituted tir.Any expressions for layout transformations
csullivan commented on a change in pull request #6069: URL: https://github.com/apache/incubator-tvm/pull/6069#discussion_r466153094

## File path: src/tir/ir/data_layout.cc ##

@@ -323,7 +323,12 @@ inline Array<PrimExpr> TransformShape(const Array<PrimExpr>& src_shape,
     if (symbolic_var_set.count(i)) {
       result.push_back(tir::Any());
     } else {
-      result.push_back(ana.Simplify(tir::Substitute(rule, bind_map)));
+      auto sub = tir::Substitute(rule, bind_map);
+      if (sub.as<tir::AnyNode>()) {
+        result.push_back(tir::Any());
+      } else {
+        result.push_back(ana.Simplify(sub));
+      }

Review comment: I ended up avoiding the layout transformation on the dynamic dimension by specifying the shape for the input, which unblocked me and makes this a bit lower priority. I could still try to push this through if you prefer, but I am thinking I will close for now and reopen at a later time.
[GitHub] [incubator-tvm] csullivan closed pull request #6069: [TIR][BugFix] Avoid simplifying substituted tir.Any expressions for layout transformations
csullivan closed pull request #6069: URL: https://github.com/apache/incubator-tvm/pull/6069
[GitHub] [incubator-tvm] tqchen commented on pull request #6062: [Relay][Pass] Support combine multiple dense op just into dense
tqchen commented on pull request #6062: URL: https://github.com/apache/incubator-tvm/pull/6062#issuecomment-669664663 @MarisaKirisame please follow up
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6069: [TIR][BugFix] Avoid simplifying substituted tir.Any expressions for layout transformations
tqchen commented on a change in pull request #6069: URL: https://github.com/apache/incubator-tvm/pull/6069#discussion_r466126262

## File path: src/tir/ir/data_layout.cc ##

@@ -323,7 +323,12 @@ inline Array<PrimExpr> TransformShape(const Array<PrimExpr>& src_shape,
     if (symbolic_var_set.count(i)) {
       result.push_back(tir::Any());
     } else {
-      result.push_back(ana.Simplify(tir::Substitute(rule, bind_map)));
+      auto sub = tir::Substitute(rule, bind_map);
+      if (sub.as<tir::AnyNode>()) {
+        result.push_back(tir::Any());
+      } else {
+        result.push_back(ana.Simplify(sub));
+      }

Review comment: Please follow up :)
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [WIP][Target] Creating Target from JSON-like Configuration
junrushao1994 commented on a change in pull request #6218: URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r466126075

## File path: include/tvm/target/target.h ##

@@ -93,6 +94,13 @@ class TargetNode : public Object {
  private:
   /*! \brief Internal string repr. */
   mutable std::string str_repr_;
+  /*! \brief Parsing TargetNode::attrs from a list of raw strings. */
+  ObjectRef ParseAttr(const ObjectRef& obj, const TargetKindNode::ValueTypeInfo& info) const;

Review comment: Will check all documents after the impl is done :-)
[GitHub] [incubator-tvm] tqchen merged pull request #6193: [DOCS] Update pass infra tutorial
tqchen merged pull request #6193: URL: https://github.com/apache/incubator-tvm/pull/6193
[incubator-tvm] branch master updated: [DOCS] Update pass infra tutorial (#6193)
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

The following commit(s) were added to refs/heads/master by this push:
     new 5721387  [DOCS] Update pass infra tutorial (#6193)
5721387 is described below

commit 57213879e6ccaa4c0e2ba08b0dca075b623a8742
Author: Zhi <5145158+zhi...@users.noreply.github.com>
AuthorDate: Wed Aug 5 20:36:01 2020 -0700

    [DOCS] Update pass infra tutorial (#6193)

    * [DOCS] Update pass infra tutorial

    * update tutorial
---
 docs/dev/index.rst                                 |   6 +-
 docs/dev/{relay_pass_infra.rst => pass_infra.rst}  | 283 ++---
 docs/dev/relay_add_pass.rst                        |   6 +-
 .../dev/{relay_pass_infra.py => use_pass_infra.py} |  78 --
 4 files changed, 147 insertions(+), 226 deletions(-)

diff --git a/docs/dev/index.rst b/docs/dev/index.rst
index c448cb0..2e577df 100644
--- a/docs/dev/index.rst
+++ b/docs/dev/index.rst
@@ -295,6 +295,11 @@ The following code snippet gives an example of PassContext configuration.
 Op is the common class to represent all system-defined primitive operator/intrinsics.
 Developers can register new Ops as well as their additional attributes(e.g. whether the Op is elementwise) to the system.
 
+.. toctree::
+   :maxdepth: 1
+
+   pass_infra
+
 tvm/target
 ----------
@@ -353,7 +358,6 @@ memory(for memory optimization).
 
    relay_intro
    relay_op_strategy
-   relay_pass_infra
    convert_layout
 
diff --git a/docs/dev/relay_pass_infra.rst b/docs/dev/pass_infra.rst
similarity index 67%
rename from docs/dev/relay_pass_infra.rst
rename to docs/dev/pass_infra.rst
index 15487ac..6fd150d 100644
--- a/docs/dev/relay_pass_infra.rst
+++ b/docs/dev/pass_infra.rst
@@ -15,24 +15,28 @@ specific language governing permissions and limitations
 under the License.
 
-.. _relay-pass-infra:
+.. _pass-infra:
 
-Relay Pass Infrastructure
-=========================
+Pass Infrastructure
+===================
 
-Relay features a series of optimization passes which improve performance metrics
+Both Relay and TVM IR contain a series of optimization passes which improve performance metrics
 of models such as mean inference, memory footprint, or power consumption for
 specific devices. There is a suite of standard optimizations as well as machine
 learning-specific optimizations including constant folding, dead code
-elimination, operator layout alteration, and operator fusion, etc. Each of these
-passes is structured as a Relay-to-Relay transformation on the abstract syntax
-tree (AST) using the analysis result collected during and/or before traversal.
+elimination, operator layout alteration, operator fusion, buffer handling, and
+loop transformation, etc. Each of these passes is structured as a ir-to-ir
+transformation using the analysis result collected during and/or before traversal.
 
-However, as Relay evolves quickly, the need for a more systematic and efficient
-way to manage these passes is becoming apparent. This doc describes the design of
-such an infra that takes the advantage of the way production compilers are used to
-manage the optimization passes and the style modern deep learning frameworks
-adopted to build up layers.
+However, as TVM evolves quickly, the need for a more systematic and efficient
+way to manage these passes is becoming apparent. In addition, a generic
+framework that manages the passes across different layers of the TVM stack (e.g.
+Relay and tir) paves the way for developers to quickly prototype and plug the
+implemented passes into the system.
+
+This doc describes the design of such an infra that takes the advantage of the
+way production compilers are used to manage the optimization passes and the style
+modern deep learning frameworks adopted to build up layers.
 
 For example, many existing production compilers, such as GCC and LLVM, employ
 pass managers to effectively manage the execution of passes. Initially managing
@@ -88,10 +92,10 @@ needs to be executed when running under a user-provided optimization level. The
 
 .. code:: c++
 
-class PassInfoNode : public RelayNode {
-  std::string name;
+class PassInfoNode : public Object {
+  String name;
   int opt_level;
-  std::vector<std::string> required;
+  Array<String> required;
 };
 
 PassContext
 ~~~~~~~~~~~
@@ -111,17 +115,16 @@ This class is designed for users to conveniently write the Python ``with``
 syntax to perform optimizations under a certain configuration. In addition, the
 users can obtain the context that is available within a certain program scope in
 a thread-safe way through ``PassContext::Current()``, since a thread-local store
-``RelayPassContextThreadLocalStore`` is used to hold the created pass context
+``PassContextThreadLocalStore`` is used to hold the created pass context
 objects. Examples will be provided later to show how we can use
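The ``PassContext::Current()`` design described in this diff — a thread-local store holding a stack of context objects, driven by Python's ``with`` syntax — can be sketched generically (a toy analogue of the pattern, not TVM's actual implementation):

```python
import threading

class PassContext:
    """Toy analogue of a pass context with a thread-local context stack."""
    _store = threading.local()

    def __init__(self, opt_level=2):
        self.opt_level = opt_level

    @classmethod
    def _stack(cls):
        # Each thread lazily gets its own stack seeded with a default context.
        if not hasattr(cls._store, "stack"):
            cls._store.stack = [PassContext()]
        return cls._store.stack

    @classmethod
    def current(cls):
        return cls._stack()[-1]

    def __enter__(self):
        self._stack().append(self)
        return self

    def __exit__(self, *exc):
        self._stack().pop()

print(PassContext.current().opt_level)      # 2 (the default context)
with PassContext(opt_level=3):
    print(PassContext.current().opt_level)  # 3 inside the with block
print(PassContext.current().opt_level)      # back to 2
```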
[GitHub] [incubator-tvm] tqchen commented on pull request #5913: [ndarray][autotvm] support ndarray.non_empty
tqchen commented on pull request #5913: URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-669662810 Ping
[GitHub] [incubator-tvm] FrozenGene commented on pull request #6206: [Caffe Frontend] introduce caffe frontend for tvm
FrozenGene commented on pull request #6206: URL: https://github.com/apache/incubator-tvm/pull/6206#issuecomment-669659791

> Hi @tqchen @FrozenGene:
> I adopted the suggestion that just add caffe env in the ci_cpu, see [#6023 (comment)](https://github.com/apache/incubator-tvm/pull/6023#discussion_r461209556)
> But now there are some errors in this pr when doing ci_gpu. I have read the error log, and find that it will try to generate docs by executing this script tutorials/frontend/from_caffe.py in tvmai/ci-gpu:v0.64, and obviously there is no Caffe env in tvmai/ci-gpu:v0.64. So is there any way to avoid this problem, since we only have caffe env in ci_cpu?

I think we have to install Caffe in the GPU docker too.
[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6187: [Ansor][AutoTVM v2.0] Phase 1: The base class for cost models
FrozenGene commented on a change in pull request #6187: URL: https://github.com/apache/incubator-tvm/pull/6187#discussion_r466110710

## File path: tests/python/unittest/test_auto_scheduler_cost_model.py ##

@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Test cost models"""
+
+import tvm
+from tvm import auto_scheduler
+
+from test_auto_scheduler_common import matmul_auto_scheduler_test
+
+
+def test_random_model():

Review comment: Final comment. Let us disable this test if the user's environment doesn't have LLVM.
[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6187: [Ansor][AutoTVM v2.0] Phase 1: The base class for cost models
FrozenGene commented on a change in pull request #6187: URL: https://github.com/apache/incubator-tvm/pull/6187#discussion_r466110388

## File path: tests/python/unittest/test_auto_scheduler_cost_model.py ##

@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Test cost models"""
+
+import tvm
+from tvm import auto_scheduler
+
+from test_auto_scheduler_common import matmul_auto_scheduler_test
+
+
+def test_random_model():

Review comment:
```suggestion
def test_random_model():
    if not tvm.runtime.enabled("llvm"):
        return
```
[GitHub] [incubator-tvm] zhanghaohit commented on pull request #6126: [VTA][OpenCL] intelfocl
zhanghaohit commented on pull request #6126: URL: https://github.com/apache/incubator-tvm/pull/6126#issuecomment-669646117

> Thanks @zhanghaohit. This is converging nicely. I made some additional comments.
>
> In addition, I'd like to request further partitioning given the large size of the PR.
>
> (1) the following files will need to migrate to #6125:
>
> * src/relay/op/annotation/annotation.cc
> * python/tvm/relay/op/_tensor.py
>
> (2) changes made for quantization should be isolated to an additional PR, this includes:
>
> * src/relay/quantize/realize.cc
> * python/tvm/relay/quantize/_partition.py
> * python/tvm/relay/quantize/_annotate.py

Changes made for quantization have been moved to #6191.
[GitHub] [incubator-tvm] zhanghaohit commented on pull request #6126: [VTA][OpenCL] intelfocl
zhanghaohit commented on pull request #6126: URL: https://github.com/apache/incubator-tvm/pull/6126#issuecomment-669645896

> Please address aforementioned changes, thank you

Done. Thanks @tmoreau89 for the comments.
[GitHub] [incubator-tvm] slyubomirsky opened a new pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM
slyubomirsky opened a new pull request #6219: URL: https://github.com/apache/incubator-tvm/pull/6219

In reference to [this RFC](https://discuss.tvm.ai/t/rfc-incorporate-existing-relay-aot-compiler-in-mainline/7393), this PR is intended to incorporate the existing external [Relay ahead-of-time (AoT) compiler](https://github.com/uwsampl/relay-aot), which was primarily written by @MarisaKirisame, into TVM. To start, I am simply including most of the files from the AoT compiler repo nearly verbatim, though the interfaces should be changed to better adhere to the high-level vision for TVM (especially since the initial code comes from a research prototype).

The prototype AoT compiler operates by translating Relay ASTs directly into C++ code and using TVM's JIT compiler to register all primitive functions (i.e., the C++ code calls into TVM's operator cache to handle operators). This results in producing a C++ file and requires calling the system's C++ compiler (in the prototype, assumed to be `clang`).

I would be curious to hear others' thoughts (e.g., @jroesch @weberlo @tqchen) about how this compiler can be better integrated into TVM's systems. Based on the discussion in the RFC, it sounds like the interface should be made to take an IRModule and produce a runtime module that can call the compiled functions. Ideally the system could be made modular to allow for target languages other than C++.
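The approach the PR describes — walking an AST and emitting source code that is then compiled ahead of time — can be illustrated with a toy translator (this emits Python rather than C++ and is not TVM-specific; it only sketches the AoT code-generation idea):

```python
# Toy expression "IR": nested tuples of ("add"|"mul", lhs, rhs) or variable names.
def codegen(expr):
    """Translate the toy IR into source text for the expression."""
    if isinstance(expr, str):
        return expr
    op, lhs, rhs = expr
    sym = {"add": "+", "mul": "*"}[op]
    return "(%s %s %s)" % (codegen(lhs), sym, codegen(rhs))

def aot_compile(name, params, expr):
    """Emit a full function definition and compile it ahead of time."""
    src = "def %s(%s):\n    return %s\n" % (name, ", ".join(params), codegen(expr))
    namespace = {}
    exec(compile(src, "<aot>", "exec"), namespace)
    return namespace[name]

# Compile (x + y) * x ahead of time, then call the generated function.
f = aot_compile("f", ["x", "y"], ("mul", ("add", "x", "y"), "x"))
print(f(2, 3))  # 10
```

The real AoT compiler additionally has to lower operator calls into the operator cache and invoke an external C++ compiler; the sketch only shows the translate-then-compile pipeline.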
[GitHub] [incubator-tvm] tqchen merged pull request #6214: [RUNTIME] Enable auto conversion String->DLDataType
tqchen merged pull request #6214: URL: https://github.com/apache/incubator-tvm/pull/6214
[incubator-tvm] branch master updated (e039c87 -> 95045d1)
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

    from e039c87  match pytorch 1.6 googlenet pretrained model (#6201) (#6212)
     add 95045d1  [RUNTIME] Enable auto conversion String->DLDataType (#6214)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/container.h               | 39 ++
 include/tvm/runtime/packed_func.h             | 73 +--
 tests/python/unittest/test_node_reflection.py |  2 +
 3 files changed, 63 insertions(+), 51 deletions(-)
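For context, a DLDataType is the (type code, bits, lanes) triple from dlpack, and the merged change lets a plain string such as "float32" stand in for it. A rough sketch of the kind of string parsing such a conversion involves (an illustration, not TVM's actual parser; the type-code values match dlpack's DLDataTypeCode enum):

```python
import re

# DLPack type codes: kDLInt=0, kDLUInt=1, kDLFloat=2, kDLBfloat=4.
TYPE_CODES = {"int": 0, "uint": 1, "float": 2, "bfloat": 4}

def parse_dtype(s):
    """Parse e.g. 'float32' or 'int8x4' into a (code, bits, lanes) triple."""
    m = re.fullmatch(r"([a-z]+)(\d+)(?:x(\d+))?", s)
    if m is None:
        raise ValueError("cannot parse dtype string: %r" % s)
    base, bits, lanes = m.group(1), int(m.group(2)), int(m.group(3) or 1)
    return (TYPE_CODES[base], bits, lanes)

print(parse_dtype("float32"))  # (2, 32, 1)
print(parse_dtype("int8x4"))   # (0, 8, 4)
```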
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6218: [WIP][Target] Creating Target from JSON-like Configuration
tqchen commented on a change in pull request #6218: URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r466099841

## File path: include/tvm/target/target.h ##

@@ -93,6 +94,13 @@ class TargetNode : public Object {
  private:
   /*! \brief Internal string repr. */
   mutable std::string str_repr_;
+  /*! \brief Parsing TargetNode::attrs from a list of raw strings. */
+  ObjectRef ParseAttr(const ObjectRef& obj, const TargetKindNode::ValueTypeInfo& info) const;

Review comment: document all arguments
[GitHub] [incubator-tvm] fernchen commented on pull request #6206: [Caffe Frontend] introduce caffe frontend for tvm
fernchen commented on pull request #6206: URL: https://github.com/apache/incubator-tvm/pull/6206#issuecomment-669635460

Hi @tqchen @FrozenGene: I adopted the suggestion to just add a Caffe env in ci_cpu, see https://github.com/apache/incubator-tvm/pull/6023#discussion_r461209556

But now there are some errors in this PR when doing ci_gpu. I have read the error log and found that it tries to generate docs by executing the script tutorials/frontend/from_caffe.py in tvmai/ci-gpu:v0.64, and obviously there is no Caffe env in tvmai/ci-gpu:v0.64. So is there any way to avoid this problem, since we only have a Caffe env in ci_cpu?
[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #6218: [WIP][Target] Creating Target from JSON-like Configuration
junrushao1994 opened a new pull request #6218: URL: https://github.com/apache/incubator-tvm/pull/6218

Per [RFC](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844?u=junrushao1994), we want to construct complicated targets using dicts, e.g.

```
{
    "kind": "cuda",
    "tag": "nvidia/tx2-cudnn",
    "keys": ["cuda", "gpu"],
    "libs": ["cudnn"],
    "target_host": {
        "kind": "llvm",
        "system_lib": True,
        "mtriple": "aarch64-linux-gnu",
        "mattr": ["+neon"]
    }
}
```

CC: @jwfromm @tqchen @comaniac
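As a rough illustration of how such a JSON-like form relates to the older space-separated target strings (a sketch under the assumption of that mapping, not the parser this WIP PR implements):

```python
def flatten_target_config(cfg):
    """Render a JSON-like target config as an old-style target string."""
    parts = [cfg["kind"]]
    for key, value in cfg.items():
        if key in ("kind", "tag", "keys", "target_host"):
            continue  # these are not plain "-key=value" attributes
        flag = key.replace("_", "-")
        if isinstance(value, bool):
            if value:
                parts.append("-%s" % flag)  # boolean flags have no value
        elif isinstance(value, list):
            parts.append("-%s=%s" % (flag, ",".join(value)))
        else:
            parts.append("-%s=%s" % (flag, value))
    return " ".join(parts)

host = {
    "kind": "llvm",
    "system_lib": True,
    "mtriple": "aarch64-linux-gnu",
    "mattr": ["+neon"],
}
print(flatten_target_config(host))
# llvm -system-lib -mtriple=aarch64-linux-gnu -mattr=+neon
```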
[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen commented on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669613136 @csullivan can you run some quick experiments to confirm whether the finish does kill the other running thread?
[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes
gussmith23 commented on a change in pull request #5812: URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r466051436

## File path: tests/python/unittest/test_custom_datatypes_change_dtype.py ##

@@ -0,0 +1,553 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+
+
+def convert_ndarray(dst_dtype, array):
+    """Converts an NDArray into the specified datatype"""
+    x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+    cast = relay.Function([x], x.astype(dst_dtype))
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        return relay.create_executor('graph').evaluate(cast)(array)
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posit32", 131)
+
+    register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
+                "posit32", "float")
+    register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
+                "float", "posit32")
+    register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
+                "posit32", "int")
+    register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
+    register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
+                "posit32")
+    register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Sqrt"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="sqrt")
+    # TODO(gus) not sure if this will work...
+    register_op(lower_ite,
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="tvm_if_then_else")
+    register_op(create_lower_func("Posit32es2Exp"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="exp")
+    register_op(create_lower_func("Posit32es2Log"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="log")
+    register_op(create_lower_func("Posit32es2Sigmoid"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="sigmoid")
+    register_op(create_lower_func("Posit32es2Tanh"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="tanh")
+    # TODO(gus) these aren't actually right. these are double min(actually lowest)/max.
+    register_min_func(lambda num_bits: -1.79769e+308, "posit32")
+
+    register("posit8", 132)
+    register_op(create_lower_func("FloatToPosit8es0"), "Cast", "llvm",
+                "posit8", "float")
[GitHub] [incubator-tvm] areusch commented on a change in pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend
areusch commented on a change in pull request #6145: URL: https://github.com/apache/incubator-tvm/pull/6145#discussion_r466047901

## File path: apps/bundle_deploy/Makefile ##

@@ -42,76 +43,70 @@ demo_dynamic: $(build_dir)/demo_dynamic $(build_dir)/bundle.so $(build_dir)/bundle_c.so $(build_dir)/cat.bin
 	TVM_NUM_THREADS=1 $(build_dir)/demo_dynamic $(build_dir)/bundle.so $(build_dir)/cat.bin
 	TVM_NUM_THREADS=1 $(build_dir)/demo_dynamic $(build_dir)/bundle_c.so $(build_dir)/cat.bin

-test_dynamic: $(build_dir)/test_dynamic $(build_dir)/test_bundle.so $(build_dir)/test_bundle_c.so $(build_dir)/test_data.bin $(build_dir)/test_output.bin
-	TVM_NUM_THREADS=1 $(build_dir)/test_dynamic $(build_dir)/test_bundle.so $(build_dir)/test_data.bin $(build_dir)/test_output.bin $(build_dir)/test_graph.json $(build_dir)/test_params.bin
-	TVM_NUM_THREADS=1 $(build_dir)/test_dynamic $(build_dir)/test_bundle_c.so $(build_dir)/test_data.bin $(build_dir)/test_output.bin $(build_dir)/test_graph.json $(build_dir)/test_params.bin
+test_dynamic: $(build_dir)/test_dynamic $(build_dir)/test_bundle.so $(build_dir)/test_bundle_c.so $(build_dir)/test_data_c.bin $(build_dir)/test_output_c.bin $(build_dir)/test_data_cpp.bin $(build_dir)/test_output_cpp.bin
+	TVM_NUM_THREADS=1 $(build_dir)/test_dynamic $(build_dir)/test_bundle.so $(build_dir)/test_data_cpp.bin $(build_dir)/test_output_cpp.bin $(build_dir)/test_graph_cpp.json $(build_dir)/test_params_cpp.bin
+	TVM_NUM_THREADS=1 $(build_dir)/test_dynamic $(build_dir)/test_bundle_c.so $(build_dir)/test_data_c.bin $(build_dir)/test_output_c.bin $(build_dir)/test_graph_c.json $(build_dir)/test_params_c.bin

 demo_static: $(build_dir)/demo_static $(build_dir)/cat.bin
 	TVM_NUM_THREADS=1 $(build_dir)/demo_static $(build_dir)/cat.bin

-test_static: $(build_dir)/test_static $(build_dir)/test_data.bin $(build_dir)/test_output.bin
-	TVM_NUM_THREADS=1 $(build_dir)/test_static $(build_dir)/test_data.bin $(build_dir)/test_output.bin $(build_dir)/test_graph.json $(build_dir)/test_params.bin
+test_static: $(build_dir)/test_static $(build_dir)/test_data_c.bin $(build_dir)/test_output_c.bin
+	TVM_NUM_THREADS=1 $(build_dir)/test_static $(build_dir)/test_data_c.bin $(build_dir)/test_output_c.bin $(build_dir)/test_graph_c.json $(build_dir)/test_params_c.bin

 $(build_dir)/crt/graph_runtime/libgraph_runtime.a:
-	cd $(CRT_ROOT) && make QUIET= BUILD_DIR=$(abspath $(build_dir))/crt CRT_CONFIG=$(abspath crt_config/crt_config.h) graph_runtime
+	cd $(CRT_ROOT) && make QUIET= BUILD_DIR=$(abspath $(build_dir))/crt CRT_CONFIG=$(abspath crt_config/crt_config.h) "EXTRA_CFLAGS=$(PKG_COMPILE_OPTS)" graph_runtime

 $(build_dir)/crt/common/libcommon.a:
-	cd $(CRT_ROOT) && make QUIET= BUILD_DIR=$(abspath $(build_dir))/crt CRT_CONFIG=$(abspath crt_config/crt_config.h) common
+	cd $(CRT_ROOT) && make QUIET= BUILD_DIR=$(abspath $(build_dir))/crt CRT_CONFIG=$(abspath crt_config/crt_config.h) "EXTRA_CFLAGS=$(PKG_COMPILE_OPTS)" common

-$(build_dir)/demo_dynamic: demo.cc ${build_dir}/graph.json.c ${build_dir}/params.bin.c
+$(build_dir)/demo_dynamic: demo.cc ${build_dir}/graph_c.json.c ${build_dir}/params_c.bin.c
 	@mkdir -p $(@D)
 	g++ $(PKG_CXXFLAGS) -o $@ demo.cc -ldl

-$(build_dir)/test_dynamic: test.cc ${build_dir}/test_graph.json ${build_dir}/test_params.bin
+$(build_dir)/test_dynamic: test.cc ${build_dir}/test_graph_c.json ${build_dir}/test_params_c.bin
 	@mkdir -p $(@D)
 	g++ $(PKG_CXXFLAGS) -o $@ test.cc -ldl

-$(build_dir)/model.o: $(build_dir)/model.c
-	gcc $(PKG_CFLAGS) -c -o $@ $^
-
-$(build_dir)/demo_static: demo_static.c ${build_dir}/bundle_static.o ${build_dir}/func_registry.c ${build_dir}/model.o ${build_dir}/graph.json.c ${build_dir}/params.bin.c ${build_dir}/crt/graph_runtime/libgraph_runtime.a ${build_dir}/crt/common/libcommon.a
+$(build_dir)/demo_static: demo_static.c ${build_dir}/bundle_static.o ${build_dir}/model_c.o ${build_dir}/graph_c.json.c ${build_dir}/params_c.bin.c ${build_dir}/crt/graph_runtime/libgraph_runtime.a ${build_dir}/crt/common/libcommon.a
 	@mkdir -p $(@D)
-	gcc $(PKG_CFLAGS) -o $@ demo_static.c ${build_dir}/bundle_static.o ${build_dir}/func_registry.c ${build_dir}/model.o -lm ${build_dir}/crt/graph_runtime/libgraph_runtime.a ${build_dir}/crt/common/libcommon.a
+	gcc $(PKG_CFLAGS) -o $@ demo_static.c ${build_dir}/bundle_static.o ${build_dir}/model.o -lm ${build_dir}/crt/graph_runtime/libgraph_runtime.a ${build_dir}/crt/common/libcommon.a

-$(build_dir)/test_static: test_static.c ${build_dir}/bundle_static.o ${build_dir}/test_func_registry.c ${build_dir}/test_model.o ${build_dir}/crt/graph_runtime/libgraph_runtime.a ${build_dir}/crt/common/libcommon.a
+$(build_dir)/test_static: test_static.c ${build_dir}/bundle_static.o ${build_dir}/test_model_c.o ${build_dir}/crt/graph_runtime/libgraph_runtime.a
[GitHub] [incubator-tvm] areusch commented on a change in pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend
areusch commented on a change in pull request #6145: URL: https://github.com/apache/incubator-tvm/pull/6145#discussion_r466047788

## File path: tests/python/unittest/test_target_codegen_llvm.py ##

@@ -784,26 +784,37 @@ def dotest(do_vectorize):
     dotest(True)
     dotest(False)

+def test_llvm_crt_static_lib():
+    A = te.placeholder((32, ), dtype='bfloat16')
+    B = te.placeholder((32, ), dtype='bfloat16')
+    d = te.compute((32, ), lambda x: A[x] + B[x])
+    sch = te.create_schedule(d.op)
+    module = tvm.build(sch, [A, B, d], target=tvm.target.create('llvm --system-lib --runtime=c'))
+    print(module.get_source())
+    module.save('test.o')
+
+
 if __name__ == "__main__":
-    test_multiple_func()
-    test_llvm_large_uintimm()
-    test_llvm_import()
-    test_alignment()
-    test_rank_zero()
-    test_rank_zero_bound_checkers()
-    test_llvm_bool()
-    test_llvm_persist_parallel()
-    test_llvm_condition()
-    test_llvm_vadd_pipeline()
-    test_llvm_add_pipeline()
-    test_llvm_intrin()
-    test_llvm_overloaded_intrin()
-    test_llvm_flip_pipeline()
-    test_llvm_madd_pipeline()
-    test_llvm_temp_space()
-    test_llvm_lookup_intrin()
-    test_llvm_div()
-    test_llvm_fp_math()
-    test_dwarf_debug_information()
-    test_llvm_shuffle()
-    test_llvm_bf16()
+    # test_multiple_func()

Review comment: uncommented
[GitHub] [incubator-tvm] masahi commented on a change in pull request #6160: [ONNX]Mod operator, bug fix
masahi commented on a change in pull request #6160: URL: https://github.com/apache/incubator-tvm/pull/6160#discussion_r466036755

## File path: python/tvm/relay/frontend/onnx.py ##

@@ -530,10 +530,11 @@ class Mod(OnnxOpConverter):
     @classmethod
     def _impl_v1(cls, inputs, attr, params):
         assert len(inputs) == 2, "Mod op take 2 inputs, {} given".format(len(inputs))
-        if attr['fmod'] == 1:
+        if attr['fmod'] == 0:

Review comment: Can we add a comment here to avoid confusion?
[GitHub] [incubator-tvm] masahi commented on pull request #6160: [ONNX]Mod operator, bug fix
masahi commented on pull request #6160: URL: https://github.com/apache/incubator-tvm/pull/6160#issuecomment-669550884 I don't know whether this is intentional or not, but since we cannot change the Relay definition easily, having a workaround on the frontend side seems the better alternative.
[GitHub] [incubator-tvm] tqchen commented on issue #6197: [Rust] TVMError: Check failed: type_code_ == kDLInt (1 vs. 0) : expected int but get uint
tqchen commented on issue #6197: URL: https://github.com/apache/incubator-tvm/issues/6197#issuecomment-669540564 Fixed by #6207
[GitHub] [incubator-tvm] tqchen closed issue #6197: [Rust] TVMError: Check failed: type_code_ == kDLInt (1 vs. 0) : expected int but get uint
tqchen closed issue #6197: URL: https://github.com/apache/incubator-tvm/issues/6197
[GitHub] [incubator-tvm] anijain2305 commented on pull request #6217: [TFLite, QNN] Slice op
anijain2305 commented on pull request #6217: URL: https://github.com/apache/incubator-tvm/pull/6217#issuecomment-669522678 @siju-samuel Please review.
[GitHub] [incubator-tvm] jwfromm commented on pull request #6160: [ONNX]Mod operator, bug fix
jwfromm commented on pull request #6160: URL: https://github.com/apache/incubator-tvm/pull/6160#issuecomment-669510813 @masahi, this PR made me realize that `relay.mod` has the functionality of `np.fmod` and `relay.floor_mod` is equivalent to `np.mod`, which seems kind of backwards. Do you know if that's a bug or intentional? If it's on purpose, we should get this PR merged.
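For readers following this thread: the difference between the two modulo conventions only shows up when the operands have different signs. A quick NumPy check (this illustrates NumPy semantics only, not TVM code) shows the mismatch jwfromm describes:

```python
import numpy as np

# Floored modulo (np.mod): the result takes the sign of the divisor.
floored = np.mod(-7, 3)      # -7 = -3 * 3 + 2

# Truncated modulo (np.fmod): the result takes the sign of the dividend.
truncated = np.fmod(-7, 3)   # -7 = -2 * 3 + (-1)

print(floored, truncated)
```

So an ONNX `Mod` with `fmod == 0` must lower to the floored variant, which is why the sign convention matters in the frontend fix above.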
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen edited a comment on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669506088 The main question, though, is whether finish will actually kill the running job on the other thread -- thus achieving the functionality of the timeout -- or whether the stability comes from the fact that we simply disable the timeout. That boils down to the semantics of finish itself, and it would be great to confirm.
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen edited a comment on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669506088 The main question, though, is whether finish will actually kill the running job -- thus achieving the functionality of the timeout -- or whether the stability comes from the fact that we are in effect disabling the timeout. That boils down to the semantics of finish itself, and it would be great to confirm.
[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen commented on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669506088 The main question, though, is whether finish will actually kill the running job -- thus achieving the functionality of the timeout -- or whether the stability comes from the fact that we are in effect disabling the timeout. That boils down to the semantics of finish itself.
[GitHub] [incubator-tvm] csullivan commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
csullivan commented on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669497215 I'm not an expert on the JVM in Android, but from what I can tell, using System.exit is recommended against in almost all cases. It sounds like you are expecting System.exit to kill the TVMRPC app (the way a force stop would), but this does not seem to be the behavior of System.exit, as exit will resume any paused activities left on the activity stack after exiting [[ref]](https://proandroiddev.com/a-cautionary-tale-on-android-do-not-call-system-exit-5279e0d5dbe0). Overall, I am not claiming that the use of finish() instead is necessarily the correct implementation, but empirically the system is much more stable.
[GitHub] [incubator-tvm] masahi closed issue #6201: GoogleNet aux is None in pytorch test
masahi closed issue #6201: URL: https://github.com/apache/incubator-tvm/issues/6201
[GitHub] [incubator-tvm] masahi merged pull request #6212: match pytorch 1.6 googlenet pretrained model (#6201)
masahi merged pull request #6212: URL: https://github.com/apache/incubator-tvm/pull/6212
[incubator-tvm] branch master updated: match pytorch 1.6 googlenet pretrained model (#6201) (#6212)
This is an automated email from the ASF dual-hosted git repository. masahi pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

The following commit(s) were added to refs/heads/master by this push:
     new e039c87  match pytorch 1.6 googlenet pretrained model (#6201) (#6212)

e039c87 is described below

commit e039c8755fddfdd14da16a7c3c3c2424f68cddd4
Author: wjliu
AuthorDate: Thu Aug 6 04:02:36 2020 +0800

    match pytorch 1.6 googlenet pretrained model (#6201) (#6212)
---
 tests/python/frontend/pytorch/test_forward.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tests/python/frontend/pytorch/test_forward.py b/tests/python/frontend/pytorch/test_forward.py
index ab9cca1..e370cd5 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -68,7 +68,11 @@ def load_torchvision(model_name):
     for channel in range(3):
         input_data[:, channel] -= mean[channel]
         input_data[:, channel] /= std[channel]
-    model = getattr(torchvision.models, model_name)(pretrained=True)
+
+    if model_name.startswith("googlenet"):
+        model = getattr(torchvision.models, model_name)(pretrained=True, aux_logits=True)
+    else:
+        model = getattr(torchvision.models, model_name)(pretrained=True)
     model = model.float().eval()
     return model, [input_data]
[GitHub] [incubator-tvm] masahi merged pull request #6203: [Relay] pytorch frontend support conv1d
masahi merged pull request #6203: URL: https://github.com/apache/incubator-tvm/pull/6203
[incubator-tvm] branch master updated: [Relay] pytorch frontend support conv1d (#6203)
This is an automated email from the ASF dual-hosted git repository. masahi pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

The following commit(s) were added to refs/heads/master by this push:
     new 82610aa  [Relay] pytorch frontend support conv1d (#6203)

82610aa is described below

commit 82610aa42d4e521d0696f53501c75c7de6ac2bc2
Author: Tianming Xu
AuthorDate: Thu Aug 6 04:01:27 2020 +0800

    [Relay] pytorch frontend support conv1d (#6203)

    * [Relay] pytorch frontend support conv1d
    * add tests for conv1d

    Co-authored-by: xutianming.xtm
---
 python/tvm/relay/frontend/pytorch.py          | 23 +--
 tests/python/frontend/pytorch/test_forward.py | 55 ++-
 2 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index 57b64ac..3dfdb2f 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -752,7 +752,7 @@ def _convolution():
         # If groups > 1 but weight_shape[1] != 1, this is group convolution
         if groups > 1 and weight_shape[1] == 1:
             channel_multiplier = channels // groups
-            new_weight_shape = (groups, channel_multiplier, weight_shape[2], weight_shape[3])
+            new_weight_shape = (groups, channel_multiplier) + tuple(weight_shape[2:])
             weight = _op.transform.reshape(weight, new_weight_shape)

         kernel_size = weight_shape[2:]
@@ -760,12 +760,18 @@ def _convolution():
         if isinstance(strides, _expr.Expr):
             strides = _infer_shape(strides)
+        if len(kernel_size) == 1:
+            strides = (1, ) + strides

         if isinstance(padding, _expr.Expr):
             padding = _infer_shape(padding)
+        if len(kernel_size) == 1:
+            padding = (0, ) + padding

         if isinstance(dilation, _expr.Expr):
             dilation = _infer_shape(dilation)
+        if len(kernel_size) == 1:
+            dilation = (1, ) + dilation

         if use_transpose:
             if len(kernel_size) == 3:
@@ -785,6 +791,9 @@ def _convolution():
             data_layout = "NCHW"
             kernel_layout = "OIHW"

+        if len(kernel_size) == 1:
+            data = _op.expand_dims(data, axis=2)
+            weight = _op.expand_dims(weight, axis=2)

         conv_out = conv_op(data,
                            weight,
@@ -793,15 +802,21 @@ def _convolution():
                            dilation=dilation,
                            groups=groups,
                            channels=channels,
-                           kernel_size=kernel_size,
+                           kernel_size=[1] + kernel_size \
+                               if len(kernel_size) == 1 \
+                               else kernel_size,
                            data_layout=data_layout,
                            kernel_layout=kernel_layout,
                            out_layout="",
                            out_dtype="")
         if use_bias:
-            return _op.nn.bias_add(conv_out, bias)
+            res = _op.nn.bias_add(conv_out, bias)
         else:
-            return conv_out
+            res = conv_out
+        if len(kernel_size) == 1:
+            res = _op.squeeze(res, axis=[2])
+        return res

     return _impl

 def _softmax():
diff --git a/tests/python/frontend/pytorch/test_forward.py b/tests/python/frontend/pytorch/test_forward.py
index 6a572db..ab9cca1 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -702,7 +702,8 @@ def test_forward_hardtanh():
 def test_forward_conv():
     torch.set_grad_enabled(False)
-    input_shape = [1, 3, 10, 10]
+    conv1d_input_shape = [1, 3, 10]
+    conv2d_input_shape = [1, 3, 10, 10]

     class Conv2D1(Module):
         def __init__(self):
@@ -731,23 +732,59 @@ def test_forward_conv():
         def forward(self, *args):
             return self.softmax(self.conv(args[0]))

-    input_data = torch.rand(input_shape).float()
-    verify_model(Conv2D1().float().eval(), input_data=input_data)
-    verify_model(Conv2D2().float().eval(), input_data=input_data)
+    class Conv1D1(Module):
+        def __init__(self):
+            super(Conv1D1, self).__init__()
+            self.conv = torch.nn.Conv1d(3, 6, 7)
+            self.softmax = torch.nn.Softmax()
+
+        def forward(self, *args):
+            return self.softmax(self.conv(args[0]))
+
+    class Conv1D2(Module):
+        def __init__(self):
+            super(Conv1D2, self).__init__()
+            self.conv = torch.nn.Conv1d(3, 6, 7, bias=False)
+            self.softmax = torch.nn.Softmax()
+
+        def forward(self, *args):
+            return self.softmax(self.conv(args[0]))
+
+    class Conv1D3(Module):
+        def __init__(self):
+            super(Conv1D3,
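The commit above lowers `Conv1d` by inserting a dummy spatial axis, running the 2-D convolution, and squeezing the axis back out. A NumPy sketch (hypothetical naive `conv1d`/`conv2d` helpers written for this illustration, not TVM code) shows why the trick is sound:

```python
import numpy as np

def conv1d(x, w):
    # Naive valid-mode 1-D cross-correlation: x is (C, L), w is (O, C, K).
    out_ch, _, k = w.shape
    length = x.shape[1] - k + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for i in range(length):
            out[o, i] = np.sum(x[:, i:i + k] * w[o])
    return out

def conv2d(x, w):
    # Naive valid-mode 2-D cross-correlation: x is (C, H, W), w is (O, C, KH, KW).
    out_ch, _, kh, kw = w.shape
    height = x.shape[1] - kh + 1
    width = x.shape[2] - kw + 1
    out = np.zeros((out_ch, height, width))
    for o in range(out_ch):
        for i in range(height):
            for j in range(width):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

x = np.random.rand(3, 10)      # 3 input channels, length 10
w = np.random.rand(6, 3, 7)    # 6 output channels, kernel size 7

direct = conv1d(x, w)
# Same computation via 2-D conv after inserting a dummy height axis,
# mirroring the expand_dims / squeeze pattern in the commit.
lifted = conv2d(x[:, None, :], w[:, :, None, :]).squeeze(1)

assert np.allclose(direct, lifted)
```

Because the dummy axis has size 1 and the kernel height is 1, the 2-D convolution degenerates exactly to the 1-D one, which is why only strides/padding/dilation need a `(1, )` / `(0, )` prefix.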
[GitHub] [incubator-tvm] masahi commented on pull request #6203: [Relay] pytorch frontend support conv1d
masahi commented on pull request #6203: URL: https://github.com/apache/incubator-tvm/pull/6203#issuecomment-669467932 Thanks @xutianming
[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen commented on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669438120 This is used to get around the fact that android does not support fork of a process
[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
tqchen commented on pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-669437605 The intended behavior of the watchdog is actually to kill the process when a timeout happens -- that is, the other running process. We will need to configure the app to auto-restart, which will restart an RPC session. So the original exit behavior is intended.
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs
kevinthesun commented on a change in pull request #6024: URL: https://github.com/apache/incubator-tvm/pull/6024#discussion_r465948305

## File path: src/relay/op/tensor/transform.cc ##

@@ -2146,7 +2146,18 @@ Array<te::Tensor> StridedSliceCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
   const auto* param = attrs.as<StridedSliceAttrs>();
   CHECK(param != nullptr);
-  if (param->begin && param->end && param->strides) {
+
+  bool dyn = false;
+  for (auto& v : out_type.as<TensorTypeNode>()->shape) {
+    if (const tir::VarNode* var_node = v.as<tir::VarNode>()) {
+      if (var_node->name_hint == "any_dim") {
+        dyn = true;
+        break;
+      }
+    }
+  }
+
+  if (param->begin && param->end && param->strides && !dyn) {

Review comment: Yeah. I think it would be more complicated to fix topi. Probably we can use dynamic stridedslice compute.
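The C++ check above walks the inferred output shape looking for symbolic `any_dim` variables to decide whether the static strided-slice compute can be used. A minimal Python sketch of the same decision, modeling symbolic dims as the string `"any_dim"` (a simplification; TVM represents them as `tir::Var` nodes):

```python
def has_dynamic_dim(shape):
    # A dim is dynamic if it is the symbolic placeholder "any_dim"
    # rather than a concrete integer.
    return any(dim == "any_dim" for dim in shape)

# Fully static shapes can use the compile-time strided-slice compute ...
print(has_dynamic_dim([1, 3, 224, 224]))     # False
# ... while shapes with an unknown dim must fall back to the dynamic path.
print(has_dynamic_dim([1, "any_dim", 224]))  # True
```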
[GitHub] [incubator-tvm] anijain2305 opened a new pull request #6217: [TFLite, QNN] Slice op
anijain2305 opened a new pull request #6217: URL: https://github.com/apache/incubator-tvm/pull/6217 TFLite quantized slice op has the same input and output qnn params. Just adding a check and a test case. @d-smirnov Please take a look at this PR if it can help simplify https://github.com/apache/incubator-tvm/pull/6018 @u99127
[GitHub] [incubator-tvm] csullivan opened a new pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout
csullivan opened a new pull request #6216: URL: https://github.com/apache/incubator-tvm/pull/6216 Calling System.exit(0) from the watchdog (RPCActivity) emits an interrupt request that goes uncaught in some Android environments, resulting in a system crash. I believe the correct non-destructive behavior should be to finish the RPCActivity, thereby popping the activity stack and returning control to the MainActivity, from which the RPCActivity can be restarted using the normal auto-reboot sequence.
[incubator-tvm] branch master updated (343074f -> 9a362be)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 343074f [Target] 64-bit RPi4b target (#6211) add 9a362be Pass mfloat-abi to LLVMModule::Init (#6150) No new revisions were added by this update. Summary of changes: src/target/llvm/codegen_blob.cc | 7 ++- src/target/llvm/llvm_module.cc | 5 + 2 files changed, 11 insertions(+), 1 deletion(-)
[GitHub] [incubator-tvm] tqchen closed issue #6157: [feature request] Support for cuda 11
tqchen closed issue #6157: URL: https://github.com/apache/incubator-tvm/issues/6157
[GitHub] [incubator-tvm] tqchen commented on pull request #6213: fix compilation error with cuda 11
tqchen commented on pull request #6213: URL: https://github.com/apache/incubator-tvm/pull/6213#issuecomment-669314072 Thanks @lanchongyizu @cbalint13 !
[GitHub] [incubator-tvm] tqchen merged pull request #6150: Fix -mfloat-abi=soft compilation for ARM with OpenCL target
tqchen merged pull request #6150: URL: https://github.com/apache/incubator-tvm/pull/6150
[incubator-tvm] branch master updated (9a362be -> 1b37163)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 9a362be Pass mfloat-abi to LLVMModule::Init (#6150) add 1b37163 fix compilation error with cuda 11 (#6213) No new revisions were added by this update. Summary of changes: src/runtime/contrib/cublas/cublas.cc | 4 1 file changed, 4 insertions(+)
[GitHub] [incubator-tvm] tqchen merged pull request #6213: fix compilation error with cuda 11
tqchen merged pull request #6213: URL: https://github.com/apache/incubator-tvm/pull/6213
[GitHub] [incubator-tvm] tqchen commented on issue #6157: [feature request] Support for cuda 11
tqchen commented on issue #6157: URL: https://github.com/apache/incubator-tvm/issues/6157#issuecomment-669314224 closed by #6213 thanks to @lanchongyizu
[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum
anijain2305 commented on a change in pull request #6018: URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r465863240

## File path: python/tvm/relay/frontend/tflite.py ##

@@ -1089,7 +1093,7 @@ def convert_square(self, op):
         return out

-    def _convert_elemwise(self, relay_op, op):
+    def _convert_elemwise(self, relay_op, op, use_real_qnn=True):

Review comment: Can we skip use_real_qnn by moving the check to L1225 and L1229 and keeping _convert_elemwise unchanged? Adding use_real_qnn seems a little ad hoc.
[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum
anijain2305 commented on a change in pull request #6018: URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r465862408

## File path: tests/python/frontend/tflite/test_forward.py ##

@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
     # convert to tflite model
     converter = tf.lite.TFLiteConverter.from_session(
         sess, input_tensors, output_tensors)
-
+    converter.experimental_new_converter = experimental_new_converter

Review comment: Doesn't the term "experimental" suggest that the feature is not mature yet? Typically, I have seen that experimental features go through code churn and can be deprecated, and the API may also change before it matures. This is the main reason I am suggesting not to put this in.
[GitHub] [incubator-tvm] tkonolige opened a new pull request #6215: [FIX] Verify that tensor reshape is valid.
tkonolige opened a new pull request #6215: URL: https://github.com/apache/incubator-tvm/pull/6215 This PR adds a check to reshape to verify that the input and output dimensions are compatible. Fixes #6210 @mbrookhart
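For context, a reshape compatibility check of the kind this PR describes can be sketched in Python (a hypothetical `check_reshape` helper mimicking NumPy-style `-1` inference; the actual TVM check lives in the C++ type relation):

```python
from functools import reduce
from operator import mul

def check_reshape(in_shape, out_shape):
    """Verify that out_shape (possibly with one -1 wildcard) is
    compatible with in_shape; return the resolved output shape."""
    in_size = reduce(mul, in_shape, 1)
    wildcards = [i for i, d in enumerate(out_shape) if d == -1]
    if len(wildcards) > 1:
        raise ValueError("at most one -1 dimension is allowed")
    known = reduce(mul, (d for d in out_shape if d != -1), 1)
    if wildcards:
        if in_size % known != 0:
            raise ValueError("cannot infer the -1 dimension")
        out_shape = list(out_shape)
        out_shape[wildcards[0]] = in_size // known
    elif known != in_size:
        raise ValueError(
            "reshape from %s to %s changes the number of elements"
            % (in_shape, out_shape))
    return tuple(out_shape)

print(check_reshape((2, 3, 4), (6, -1)))  # (6, 4)
```

Without such a check, an incompatible reshape silently produces a malformed tensor instead of failing at type-inference time, which is the bug class #6210 reports.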
[GitHub] [incubator-tvm] tkonolige commented on pull request #6182: [Topi,x86] Split MKL from BLAS.
tkonolige commented on pull request #6182: URL: https://github.com/apache/incubator-tvm/pull/6182#issuecomment-669298971 Separating MKL from BLAS has two purposes:

1. We can compare other BLAS implementations vs MKL. I think MKL can be better in general, but we have no way of comparing.
2. MKL has a number of operations that are not part of BLAS, like batch matrix products and sparse matrix products. With MKL separate, it is easier to add these features.
[GitHub] [incubator-tvm] trevor-m commented on pull request #6150: Fix -mfloat-abi=soft compilation for ARM with OpenCL target
trevor-m commented on pull request #6150: URL: https://github.com/apache/incubator-tvm/pull/6150#issuecomment-669295770 @kevinthesun CI is passing now
[GitHub] [incubator-tvm] icemelon9 commented on pull request #6182: [Topi,x86] Split MKL from BLAS.
icemelon9 commented on pull request #6182: URL: https://github.com/apache/incubator-tvm/pull/6182#issuecomment-669295129 I can understand separating MKLDNN from BLAS. But why separate MKL from the BLAS library?
[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6162: [Parser] Parser 2.0 part 2
mbrookhart commented on a change in pull request #6162: URL: https://github.com/apache/incubator-tvm/pull/6162#discussion_r465838414

## File path: python/tvm/error.py ##

```
@@ -121,3 +121,7 @@ class OpAttributeUnImplemented(OpError, NotImplementedError):
         "Attribute {} is not supported in operator {}".format(
             attr_name, op_name))
     """
+
+@register_error
+class DiagnosticError(TVMError):
```

Review comment: Docstring instead of pass?

## File path: src/parser/parser.cc ##

```
@@ -1144,62 +1215,95 @@ class Parser {
   }

   // We need a zero-arity case for constructors.
-  if (expr.as()) {
-    return Expr(Call(expr, {}));
-  } else {
-    return expr;
+  if (auto ctor_node = expr.as()) {
+    if (ctor_node->inputs.size() == 0) {
+      return Expr(Call(expr, {}));
+    }
   }
+
+  return expr;
 });
 }

+ Expr GetOp(const std::string& op_name, const Token& tok) {
+   try {
+     return Op::Get(op_name);
+   } catch (dmlc::Error e) {
+     this->diag_ctx->Emit(DiagnosticBuilder(DiagnosticLevel::Error, tok->span)
+                          << "operator `" << op_name
+                          << "` not found, perhaps you forgot to register it?");
+     return Expr();
+   }
+ }
+
 Expr ParseAtomicExpr() {
-  return ConsumeWhitespace([this] {
+  DLOG(INFO) << "Parser::ParseAtomicExpr";
+  auto expr = ConsumeWhitespace([this] {
     auto next = Peek();
     switch (next->token_type) {
       case TokenType::Integer:
       case TokenType::Float: {
         Consume(next->token_type);
         auto number = NumberToNDArray(next);
-        Expr e = Constant(number);
+        Expr e = Constant(number, next->span);
         return e;
       }
       case TokenType::Boolean: {
         Consume(TokenType::Boolean);
         int value = Downcast(next->data);
         auto boolean = BooleanToNDarray(value);
-        Expr e = Constant(boolean);
+        Expr e = Constant(boolean, next->span);
         return e;
       }
+      // Parse a local of the form `%x`.
       case TokenType::Local: {
         Consume(TokenType::Local);
         return Expr(LookupLocal(next));
       }
+      // Parse a local of the form `@x`.
      case TokenType::Global: {
        auto string = next.ToString();
        Consume(TokenType::Global);
        auto global = global_names.Get(string);
        if (!global) {
+         // TODO(@jroesch): fix global's needing span information
          auto global_var = GlobalVar(string);
          global_names.Add(string, global_var);
          return Expr(global_var);
        } else {
          return Expr(global.value());
        }
      }
+     // Parse a local of the form `x`.
+     // Right now we fail to parse `x.y`.
      case TokenType::Identifier: {
-       auto string = next.ToString();
-       Consume(TokenType::Identifier);
-       auto ctor = ctors.Get(string);
+       auto ctor = ctors.Get(next.ToString());
        if (ctor) {
+         Consume(TokenType::Identifier);
          return Expr(ctor.value());
        } else {
-         return Expr(Op::Get(string));
+         auto idents = ParseHierName();
+         std::stringstream op_name;
+         int i = 0;
+         int periods = idents.size() - 1;
+         for (auto ident : idents) {
+           op_name << ident;
+           if (i < periods) {
+             op_name << ".";
+             i++;
+           }
+         }
```

Review comment: Comment on this loop? Maybe make it a utility? It's a bit tricky to parse what this does from the surrounding code.

## File path: src/parser/parser.cc ##

```
@@ -1231,14 +1335,38 @@ class Parser {
      }
    }
      default: {
-       std::stringstream msg;
-       msg << "expected an expression found " << Pretty(next->token_type);
-       diag_ctx.Emit({next->line, next->column, msg.str()});
-       diag_ctx.Render(std::cout);
+       this->diag_ctx->EmitFatal(DiagnosticBuilder(DiagnosticLevel::Error, next->span)
+                                 << "expected an expression found "
+                                 << Pretty(next->token_type));
        return Expr();
      }
    }
  });
+
+ if (WhenMatch(TokenType::Period)) {
+   auto index = Match(TokenType::Integer).ToNumber();
+   expr = relay::TupleGetItem(expr, index);
+ }
+
+ return expr;
 }
+
+ /*! \brief Parse a hierarchical name. */
+ Array ParseHierName() {
```

Review comment: Maybe expand this to ParseHierarchicalName? It's not a common shortening, took a while to figure out what the function name meant when I saw it in code.

## File path: src/parser/token.h ##

```
@@ -85,6 +86,9 @@ enum TokenType {
   Extern,
   Match,
   PartialMatch,
+
```
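The loop flagged in the review above interleaves a `.` between identifier segments to build a dotted operator name (e.g. `nn` plus `conv2d` becomes `nn.conv2d`). As a quick illustration of the suggested utility, here is a minimal Python sketch; the function name is mine, not the parser's:

```python
def join_hierarchical_name(idents):
    """Build a dotted operator name, e.g. ["nn", "conv2d"] -> "nn.conv2d".

    Equivalent to the manual stringstream loop in ParseAtomicExpr: emit each
    segment, and a "." after every segment except the last.
    """
    return ".".join(idents)

print(join_hierarchical_name(["nn", "conv2d"]))  # nn.conv2d
print(join_hierarchical_name(["add"]))           # add
```

Factoring the loop into a named helper like this is exactly the kind of utility the review comment asks for: the intent becomes readable at the call site.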
[GitHub] [incubator-tvm] tqchen opened a new pull request #6214: [RUNTIME] Enable auto conversion String->DLDataType
tqchen opened a new pull request #6214: URL: https://github.com/apache/incubator-tvm/pull/6214 cc @jroesch This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum
d-smirnov commented on a change in pull request #6018: URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r465833724

## File path: tests/python/frontend/tflite/test_forward.py ##

```
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
     # convert to tflite model
     converter = tf.lite.TFLiteConverter.from_session(
         sess, input_tensors, output_tensors)
-
+    converter.experimental_new_converter = experimental_new_converter
```

Review comment: The MLIR-based converter (experimental_new_converter=True) is already in use in TensorFlow and in TFLite. Why is there a need to postpone it?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum
d-smirnov commented on a change in pull request #6018: URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r465831277

## File path: python/tvm/relay/frontend/tflite.py ##

```
@@ -1089,7 +1093,7 @@ def convert_square(self, op):

         return out

-    def _convert_elemwise(self, relay_op, op):
+    def _convert_elemwise(self, relay_op, op, use_real_qnn=True):
```

Review comment: `use_real_qnn=False` allows `_convert_elemwise` to use the non-quantized version of the operation when all supplied parameters and the output of the operation have the same quantization values. Some other quantized tflite operations are also supposed to use this feature.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
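To make the condition in that review comment concrete: the `use_real_qnn=False` path applies only when every input tensor and the output share identical quantization parameters. A hedged Python sketch of such a check (the helper name and the `(scale, zero_point)` tuple layout are my assumptions, not the frontend's actual code):

```python
def all_same_qnn_params(tensor_params):
    """Return True if all (scale, zero_point) pairs are identical.

    When the inputs and the output of an elementwise op such as maximum or
    minimum are quantized identically, the plain (non-QNN) Relay operator can
    be applied directly to the integer tensors without requantization.
    """
    if not tensor_params:
        return False
    first = tensor_params[0]
    return all(p == first for p in tensor_params)

# inputs and output quantized identically -> plain op is safe
print(all_same_qnn_params([(0.0078, 128), (0.0078, 128), (0.0078, 128)]))  # True
# mismatched scales -> a real QNN lowering would be needed
print(all_same_qnn_params([(0.0078, 128), (0.0156, 0)]))                   # False
```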
[GitHub] [incubator-tvm] tqchen merged pull request #6211: [Target] 64-bit RPi4b target
tqchen merged pull request #6211: URL: https://github.com/apache/incubator-tvm/pull/6211 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[incubator-tvm] branch master updated: [Target] 64-bit RPi4b target (#6211)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 343074f [Target] 64-bit RPi4b target (#6211) 343074f is described below

commit 343074fdbaedfabfceb14168e87a0622bbcda36e
Author: Thierry Moreau
AuthorDate: Wed Aug 5 08:15:31 2020 -0700

    [Target] 64-bit RPi4b target (#6211)
---
 python/tvm/target/target.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

```
diff --git a/python/tvm/target/target.py b/python/tvm/target/target.py
index 597f8a5..1cde875 100644
--- a/python/tvm/target/target.py
+++ b/python/tvm/target/target.py
@@ -188,7 +188,10 @@ def arm_cpu(model='unknown', options=None):
     "p20":      ["-model=kirin970", "-mtriple=arm64-linux-android", "-mattr=+neon"],
     "p20pro":   ["-model=kirin970", "-mtriple=arm64-linux-android", "-mattr=+neon"],
     "rasp3b":   ["-model=bcm2837", "-mtriple=armv7l-linux-gnueabihf", "-mattr=+neon"],
-    "rasp4b":   ["-model=bcm2711", "-mtriple=arm-linux-gnueabihf", "-mattr=+neon"],
+    "rasp4b":   ["-model=bcm2711", "-mtriple=armv8l-linux-gnueabihf", "-mattr=+neon",
+                 "-mcpu=cortex-a72"],
+    "rasp4b64": ["-model=bcm2711", "-mtriple=aarch64-linux-gnu", "-mattr=+neon",
+                 "-mcpu=cortex-a72"],
     "rk3399":   ["-model=rk3399", "-mtriple=aarch64-linux-gnu", "-mattr=+neon"],
     "pynq":     ["-model=pynq", "-mtriple=armv7a-linux-eabi", "-mattr=+neon"],
     "ultra96":  ["-model=ultra96", "-mtriple=aarch64-linux-gnu", "-mattr=+neon"],
```
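For context, the table edited in this commit maps board model names to preset target option lists, which `arm_cpu()` then assembles into an LLVM target. A simplified sketch of that lookup follows; the dict mirrors the two entries changed here, but the function and its return format are illustrative assumptions, not TVM's real API:

```python
# Toy sketch of how arm_cpu()-style presets map a board name to target options.
# The two entries mirror this commit's rasp4b/rasp4b64 rows; arm_cpu_target()
# itself is a hypothetical stand-in for TVM's target construction.
TRANS_TABLE = {
    "rasp4b":   ["-model=bcm2711", "-mtriple=armv8l-linux-gnueabihf",
                 "-mattr=+neon", "-mcpu=cortex-a72"],
    "rasp4b64": ["-model=bcm2711", "-mtriple=aarch64-linux-gnu",
                 "-mattr=+neon", "-mcpu=cortex-a72"],
}

def arm_cpu_target(model, options=None):
    # Unknown models fall through with only the generic arm_cpu device flag.
    opts = ["-device=arm_cpu"] + TRANS_TABLE.get(model, []) + (options or [])
    return " ".join(["llvm"] + opts)

print(arm_cpu_target("rasp4b64"))
```

The point of the commit is visible in the table: the 32-bit `rasp4b` entry now uses the correct `armv8l` triple plus an explicit `-mcpu=cortex-a72`, while the new `rasp4b64` entry targets the 64-bit `aarch64-linux-gnu` triple.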
[GitHub] [incubator-tvm] tqchen commented on pull request #6204: Fix compile warnings.
tqchen commented on pull request #6204: URL: https://github.com/apache/incubator-tvm/pull/6204#issuecomment-669252538 Thanks @cbalint13 @MarisaKirisame @hanzz2007 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen commented on pull request #6204: Fix compile warnings.
tqchen commented on pull request #6204: URL: https://github.com/apache/incubator-tvm/pull/6204#issuecomment-669252405 Let us go with the warning disable for now, given that the move can be efficient (as efficient as RVO) and we do not want to introduce additional complexity here. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen merged pull request #6204: Fix compile warnings.
tqchen merged pull request #6204: URL: https://github.com/apache/incubator-tvm/pull/6204 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[incubator-tvm] branch master updated: Fix compile warnings. (#6204)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new edc5d8f Fix compile warnings. (#6204) edc5d8f is described below

commit edc5d8f35eadcc214c77b12a4cf894dcfa4d481c
Author: Balint Cristian
AuthorDate: Wed Aug 5 18:13:15 2020 +0300

    Fix compile warnings. (#6204)
---
 include/tvm/ir/attrs.h | 4 ++++
 1 file changed, 4 insertions(+)

```
diff --git a/include/tvm/ir/attrs.h b/include/tvm/ir/attrs.h
index d20ba4f..749274a 100644
--- a/include/tvm/ir/attrs.h
+++ b/include/tvm/ir/attrs.h
@@ -475,6 +475,10 @@ class AttrInitVisitor {
     } else {
       opt.value_missing_ = true;
     }
+#if defined(__GNUC__)
+#pragma GCC diagnostic ignored "-Wpragmas"
+#pragma GCC diagnostic ignored "-Wpessimizing-move"
+#endif
     return std::move(opt);
   }
```
[GitHub] [incubator-tvm] Msabih edited a comment on pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
Msabih edited a comment on pull request #4698: URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-669184211

@tmoreau89 I have tried the setup with the same versions of tvm/tensorflow on the host and the board, and the "cpu" part of the inference works fine. But when I set the target to edge_tpu, I get this error on the rpc server:

```
ERROR: Internal: Unsupported data type: 0
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare
```

And on the host machine, it says:

```
  File "tvm_inference.py", line 21, in 
    runtime = tflite_runtime.create(f.read(), ctx, runtime_target=target)
  File "/home/sabih/Documents/phd_work/MAP_WORK/tvm_env/tvm/python/tvm/contrib/tflite_runtime.py", line 49, in create
    return TFLiteModule(fcreate(bytearray(tflite_model_bytes), ctx))
  File "/home/sabih/Documents/phd_work/MAP_WORK/tvm_env/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /tvm_env/tvm/build/libtvm.so(TVMFuncCall+0x69) [0x7f2fb63f8489]
  [bt] (2) /tvm_env/tvm/build/libtvm.so(std::_Function_handler::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x46) [0x7f2fb644ad36]
  [bt] (1) /tvm_env/tvm/build/libtvm.so(tvm::runtime::RPCSession::CallFunc(void*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void* (*)(int, tvm::runtime::TVMArgValue const&), tvm::runtime::PackedFunc const*)+0x2c8) [0x7f2fb6454168]
  [bt] (0) /tvm_env/tvm/build/libtvm.so(+0xc21d6b) [0x7f2fb6450d6b]
  File "/tvm_env/tvm/src/runtime/rpc/rpc_session.cc", line 993
TVMError: Check failed: code == RPCCode::kReturn: code=4
```

The inference directly on the edge TPU works fine.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] xutianming commented on pull request #6203: [Relay] pytorch frontend support conv1d
xutianming commented on pull request #6203: URL: https://github.com/apache/incubator-tvm/pull/6203#issuecomment-669183329 @masahi Tests for conv1d and conv1d_transpose were added. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy [WIP]
jcf94 edited a comment on pull request #6184: URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-669172637 This PR shares the base class with cost models; will rebase the code after #6187 has been merged. The other parts of the code are ready for review. cc @merrymercy @comaniac @FrozenGene @junrushao1994 @tqchen This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] cbalint13 removed a comment on pull request #6213: fix compilation error with cuda 11
cbalint13 removed a comment on pull request #6213: URL: https://github.com/apache/incubator-tvm/pull/6213#issuecomment-669054715 LGTM This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] cbalint13 commented on a change in pull request #6204: Fix compile warnings.
cbalint13 commented on a change in pull request #6204: URL: https://github.com/apache/incubator-tvm/pull/6204#discussion_r465533351

## File path: include/tvm/ir/attrs.h ##

```
@@ -475,6 +475,8 @@ class AttrInitVisitor {
     } else {
       opt.value_missing_ = true;
     }
+#pragma GCC diagnostic ignored "-Wpragmas"
```

Review comment: * Restricted the pragma to GCC only for this PR's motion.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] lanchongyizu opened a new pull request #6213: fix compilation error with cuda 11
lanchongyizu opened a new pull request #6213: URL: https://github.com/apache/incubator-tvm/pull/6213 As described in #6157, an extra parameter needs to be added when building with CUDA 11. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #6211: [Target] 64-bit RPi4b target
tmoreau89 commented on a change in pull request #6211: URL: https://github.com/apache/incubator-tvm/pull/6211#discussion_r465510736

## File path: python/tvm/target/target.py ##

```
@@ -188,7 +188,8 @@ def arm_cpu(model='unknown', options=None):
     "p20":      ["-model=kirin970", "-mtriple=arm64-linux-android", "-mattr=+neon"],
     "p20pro":   ["-model=kirin970", "-mtriple=arm64-linux-android", "-mattr=+neon"],
     "rasp3b":   ["-model=bcm2837", "-mtriple=armv7l-linux-gnueabihf", "-mattr=+neon"],
-    "rasp4b":   ["-model=bcm2711", "-mtriple=arm-linux-gnueabihf", "-mattr=+neon"],
+    "rasp4b":   ["-model=bcm2711", "-mtriple=armv8l-linux-gnueabihf", "-mattr=+neon"],
+    "rasp4b64": ["-model=bcm2711", "-mtriple=aarch64-linux-gnu", "-mattr=+neon"],
```

Review comment: thanks, I've added this additional field.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[incubator-tvm] branch tmoreau89-patch-1 updated (3af97bc -> 2e93687)
This is an automated email from the ASF dual-hosted git repository. moreau pushed a change to branch tmoreau89-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 3af97bc adding 64bit target for rpi add 2e93687 adding mcpu field No new revisions were added by this update. Summary of changes: python/tvm/target/target.py | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-)