[GitHub] [incubator-tvm] woniuasd commented on issue #5133: [Torch] A list of missing op conversion in need of help
woniuasd commented on issue #5133: URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-618812554 My PyTorch code is "canvas[:, indices] = voxels". The following operators are not implemented: ['aten::index_put_']. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
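For context, the unsupported operator corresponds to PyTorch's in-place advanced-indexing assignment, which TorchScript lowers to aten::index_put_. A minimal NumPy sketch of the same scatter semantics (the shapes and names below are illustrative, not taken from the reporter's model):

```python
import numpy as np

# Illustrative emulation of `canvas[:, indices] = voxels`, the pattern
# that TorchScript lowers to aten::index_put_ (the op reported missing).
canvas = np.zeros((2, 5), dtype=np.float32)   # hypothetical shapes
indices = np.array([0, 2])
voxels = np.ones((2, 2), dtype=np.float32)
canvas[:, indices] = voxels  # scatter ones into columns 0 and 2
print(canvas.sum())  # 4.0
```

A frontend converter for this op has to support scatter-style writes, which is why it is harder than the purely functional ops in the missing-op list.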
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5426: [PY][FFI] Refactor runtime.String to subclass str
icemelon9 commented on pull request #5426: URL: https://github.com/apache/incubator-tvm/pull/5426#issuecomment-618803899 Thanks @tqchen @zhiics
[incubator-tvm] branch master updated (6c77195 -> e68450d)
This is an automated email from the ASF dual-hosted git repository. haichen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 6c77195 [FRONTEND][MXNET] support elemwise logic ops (#5361) add e68450d [PY][FFI] Introduce PyNativeObject, enable runtime.String to subclass str (#5426) No new revisions were added by this update. Summary of changes: CMakeLists.txt | 2 +- python/tvm/_ffi/_ctypes/object.py | 31 + python/tvm/_ffi/_ctypes/packed_func.py | 5 +- python/tvm/_ffi/_cython/object.pxi | 31 + python/tvm/_ffi/_cython/packed_func.pxi | 3 + python/tvm/runtime/container.py | 83 +++-- python/tvm/runtime/object.py| 14 ++--- python/tvm/runtime/object_generic.py| 4 +- src/runtime/container.cc| 19 +- src/support/ffi_testing.cc | 5 ++ tests/python/unittest/test_runtime_container.py | 24 +++ 11 files changed, 132 insertions(+), 89 deletions(-)
[GitHub] [incubator-tvm] yongfeng-nv commented on a change in pull request #5367: Improve IntervalSet's floormod
yongfeng-nv commented on a change in pull request #5367: URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414263234 ## File path: src/arith/int_set.cc ## @@ -311,6 +311,21 @@ inline IntervalSet Combine(Analyzer* analyzer, LOG(FATAL) << "Modular by zero in CombineInterval Mod"; } if (analyzer->CanProveGreaterEqual(divisor, 0)) { + if (const auto* ptr = b->min_value.as()) { +// a mod b = a - b * (a/b) if +// (i) a_max - a_min < b, i.e. that before mod, a's range doesn't cover [0, b) +// and (ii) a_min mod b <= a_max mod b, i.e. that a's range is still continuous after mod +auto tmax = a->max_value - b->min_value * floordiv(a->max_value, b->min_value); +tmax = analyzer->Simplify(tmax); +auto tmin = a->min_value - b->min_value * floordiv(a->min_value, b->min_value); +tmin = analyzer->Simplify(tmin); +auto tset = IntervalSet(tmin, tmax); +bool within_range = analyzer->CanProveLess(a->max_value - a->min_value, ptr->value); +bool wrap_around = analyzer->CanProve(tset->max_value < tset->min_value); Review comment: I missed the point that CanProve is necessary but not sufficient here. Once I modified the condition to tset->max_value >= tset->min_value, floormod([z*8+x*4, z*8+x*4+3], 8) fell back to [0, 7], matching your earlier suggested floormod implementation, because it can't prove (((x*4) + 3) - (floordiv(((x*4) + 3), 8)*8)) >= ((x*4) - (floordiv(x, 2)*8)). Therefore, I switched to the simple implementation and modified the test. ## File path: src/arith/int_set.cc ## @@ -311,6 +311,21 @@ inline IntervalSet Combine(Analyzer* analyzer, LOG(FATAL) << "Modular by zero in CombineInterval Mod"; } if (analyzer->CanProveGreaterEqual(divisor, 0)) { + if (const auto* ptr = b->min_value.as()) { +// a mod b = a - b * (a/b) if +// (i) a_max - a_min < b, i.e. that before mod, a's range doesn't cover [0, b) +// and (ii) a_min mod b <= a_max mod b, i.e. 
that a's range is still continuous after mod +auto tmax = a->max_value - b->min_value * floordiv(a->max_value, b->min_value); +tmax = analyzer->Simplify(tmax); +auto tmin = a->min_value - b->min_value * floordiv(a->min_value, b->min_value); +tmin = analyzer->Simplify(tmin); +auto tset = IntervalSet(tmin, tmax); +bool within_range = analyzer->CanProveLess(a->max_value - a->min_value, ptr->value); +bool wrap_around = analyzer->CanProve(tset->max_value < tset->min_value); +if (within_range && !wrap_around) { + return tset; Review comment: No longer an issue with the update.
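The interval rule being discussed can be checked numerically. A toy Python sketch (integer endpoints only; `within_range` and `wrap_around` mirror the variable names in the diff, everything else is illustrative and is not TVM's implementation):

```python
# Toy check of the floormod interval rule: if (a_max - a_min) < b and
# a's range does not cross a multiple of b, then
# floormod([a_min, a_max], b) == [a_min mod b, a_max mod b]; otherwise
# fall back to the conservative interval [0, b - 1].
def floormod_interval(a_min, a_max, b):
    tmin, tmax = a_min % b, a_max % b
    within_range = (a_max - a_min) < b
    wrap_around = tmax < tmin
    if within_range and not wrap_around:
        return (tmin, tmax)  # tight result
    return (0, b - 1)        # conservative fallback

print(floormod_interval(10, 13, 8))  # (2, 5): tight interval
print(floormod_interval(6, 9, 8))    # crosses 8, falls back to (0, 7)
```

Note that in the concrete sketch both conditions are decidable; the difficulty in the PR is that the analyzer must *prove* them over symbolic expressions, which is where the necessary-but-not-sufficient issue arises.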
[GitHub] [incubator-tvm] yongfeng-nv commented on a change in pull request #5367: Improve IntervalSet's floormod
yongfeng-nv commented on a change in pull request #5367: URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414259311 ## File path: include/tvm/arith/int_set.h ## @@ -152,6 +152,22 @@ class IntSet : public ObjectRef { //--- // Integer set legacy API. // +/*! + * \brief Convert std::unordered_map to Map + * + * \param dom_map The domain map to convert. + * \return The converted map. + */ +Map ConvertDomMap(const std::unordered_map& dom_map); +// /*! +// * \brief Find an symbolic integer set that contains all possible values of +// * e given the domain of each iteration variables. +// * +// * \param e The expression to be evaluated. +// * \param dom_map The domain of each variable. +// * \return An integer set that can cover all the possible values of e. +// */ +// IntSet EvalSet(PrimExpr e, const Map& dom_map); Review comment: My bad. Cleaned it up.
[GitHub] [incubator-tvm] tqchen commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax
tqchen commented on pull request #4805: URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-618773667 ping
[GitHub] [incubator-tvm] tqchen commented on pull request #5361: [FRONTEND][MXNET] support elemwise logic ops
tqchen commented on pull request #5361: URL: https://github.com/apache/incubator-tvm/pull/5361#issuecomment-618773375 Thanks @kazum @maheshambule
[incubator-tvm] branch master updated: [FRONTEND][MXNET] support elemwise logic ops (#5361)
tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 6c77195 [FRONTEND][MXNET] support elemwise logic ops (#5361) 6c77195 is described below commit 6c77195e9c277d6ed3f0fbd31dae4ab94d64af25 Author: MORITA Kazutaka AuthorDate: Fri Apr 24 11:54:26 2020 +0900 [FRONTEND][MXNET] support elemwise logic ops (#5361) --- python/tvm/relay/frontend/mxnet.py | 6 ++ tests/python/frontend/mxnet/test_forward.py | 12 +--- 2 files changed, 15 insertions(+), 3 deletions(-) diff --git a/python/tvm/relay/frontend/mxnet.py b/python/tvm/relay/frontend/mxnet.py index be2f110..775eb53 100644 --- a/python/tvm/relay/frontend/mxnet.py +++ b/python/tvm/relay/frontend/mxnet.py @@ -1751,6 +1751,12 @@ _convert_map = { "broadcast_greater_equal": _mx_compare(_op.greater_equal, _rename), "broadcast_lesser" : _mx_compare(_op.less, _rename), "broadcast_lesser_equal" : _mx_compare(_op.less_equal, _rename), +"_equal" : _mx_compare(_op.equal, _rename), +"_not_equal" : _mx_compare(_op.not_equal, _rename), +"_greater" : _mx_compare(_op.greater, _rename), +"_greater_equal" : _mx_compare(_op.greater_equal, _rename), +"_lesser": _mx_compare(_op.less, _rename), +"_lesser_equal" : _mx_compare(_op.less_equal, _rename), "elemwise_add" : _rename(_op.add), "elemwise_sub" : _rename(_op.subtract), "elemwise_mul" : _rename(_op.multiply), diff --git a/tests/python/frontend/mxnet/test_forward.py b/tests/python/frontend/mxnet/test_forward.py index 10edff9..5e4c137 100644 --- a/tests/python/frontend/mxnet/test_forward.py +++ b/tests/python/frontend/mxnet/test_forward.py @@ -328,13 +328,19 @@ def test_forward_broadcast_ops(): def test_forward_elemwise_ops(): for op in ["elemwise_add", "elemwise_sub", "elemwise_mul", - "elemwise_div", "maximum", "minimum"]: + "elemwise_div", "maximum", "minimum", + operator.lt, operator.le, 
operator.eq, + operator.ne, operator.gt, operator.ge]: shape = (3, 4, 5) dtype = 'float32' a_np = np.random.uniform(size=shape).astype(dtype) b_np = np.random.uniform(size=shape).astype(dtype) -mx_sym = _mx_symbol(mx.sym, op, [mx.sym.var('a'), mx.sym.var('b')]) -ref_res = _mx_symbol(mx.nd, op, [mx.nd.array(a_np), mx.nd.array(b_np)]) +if type(op) == str: +mx_sym = _mx_symbol(mx.sym, op, [mx.sym.var('a'), mx.sym.var('b')]) +ref_res = _mx_symbol(mx.nd, op, [mx.nd.array(a_np), mx.nd.array(b_np)]) +else: +mx_sym = op(mx.sym.var('a'), mx.sym.var('b')) +ref_res = op(mx.nd.array(a_np), mx.nd.array(b_np)) shapes = {'a': shape, 'b': shape} mod, _ = relay.frontend.from_mxnet(mx_sym, shapes, dtype) for target, ctx in ctx_list():
[GitHub] [incubator-tvm] tqchen commented on pull request #5345: [RELAY] Move frontend utils
tqchen commented on pull request #5345: URL: https://github.com/apache/incubator-tvm/pull/5345#issuecomment-618773499 please suggest a resolution (move to a new folder) and act on it :)
[GitHub] [incubator-tvm] tqchen commented on issue #5047: [Runtime] Python String container interface
tqchen commented on issue #5047: URL: https://github.com/apache/incubator-tvm/issues/5047#issuecomment-618772966 https://github.com/apache/incubator-tvm/pull/5426
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5426: [PY][FFI] Refactor runtime.String to subclass str
tqchen edited a comment on pull request #5426: URL: https://github.com/apache/incubator-tvm/pull/5426#issuecomment-618771249 cc @zhiics @icemelon9 @jroesch @wweic This is a PR that helps to prepare the std::string->String migration.
[GitHub] [incubator-tvm] tqchen commented on pull request #5426: [PY][FFI] runtime.String to subclass str
tqchen commented on pull request #5426: URL: https://github.com/apache/incubator-tvm/pull/5426#issuecomment-618771249 cc @zhiics @icemelon9 @jroesch This is a PR that helps to prepare the std::string->String migration.
[GitHub] [incubator-tvm] tqchen opened a new pull request #5426: [PY][FFI] runtime.String to subclass str
tqchen opened a new pull request #5426: URL: https://github.com/apache/incubator-tvm/pull/5426 To make runtime.String work as naturally as possible on the Python side, we make it subclass Python's str object. Note, however, that we cannot subclass Object at the same time due to Python's type layout constraint (we cannot subclass multiple classes with slots). We introduce a PyNativeObject class to handle this kind of object subclassing.
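The layout constraint described in the PR can be demonstrated in plain Python. In the sketch below, ObjectBase and PyNativeObject are simplified stand-ins for illustration, not the PR's actual classes:

```python
class ObjectBase:
    __slots__ = ["handle"]  # non-empty slots give this class its own layout

# Subclassing str together with a slotted class fails: CPython rejects
# multiple bases with conflicting instance layouts.
try:
    class BadString(str, ObjectBase):
        pass
except TypeError:
    print("layout conflict")

# Simplified stand-in for the workaround: a mixin with no slots of its
# own stays layout-compatible with str, so the subclass is allowed.
class PyNativeObject:
    __slots__ = []

class String(str, PyNativeObject):
    pass

s = String("hello")
print(isinstance(s, str), s.upper())  # True HELLO
```

This is why the PR routes the FFI state through a separate mixin rather than inheriting from Object directly.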
[GitHub] [incubator-tvm] tqchen commented on pull request #5415: [TIR][REFACTOR] Remove ir_pass in favor of analysis/transform.
tqchen commented on pull request #5415: URL: https://github.com/apache/incubator-tvm/pull/5415#issuecomment-618767898 Good catch, dump_ir is temporarily disabled because we can move to the new PassContext trace API. We should remove that line. @wpan11nv can you send a patch?
[GitHub] [incubator-tvm] wpan11nv commented on pull request #5415: [TIR][REFACTOR] Remove ir_pass in favor of analysis/transform.
wpan11nv commented on pull request #5415: URL: https://github.com/apache/incubator-tvm/pull/5415#issuecomment-618741145 The following line is broken after this change: https://github.com/apache/incubator-tvm/blob/master/python/tvm/driver/build_module.py#L161
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618732386 With https://github.com/dmlc/xgboost/pull/5590, I can now run `tests/python/unittest/test_autotvm_xgboost_model.py::test_fit` without crashing.
[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5425: [TFLite] Add config option to specify FlatBuffers location
tmoreau89 commented on pull request #5425: URL: https://github.com/apache/incubator-tvm/pull/5425#issuecomment-618721380 Thanks @michalpiszczek @tqchen, the PR has been merged
[incubator-tvm] branch master updated (6faacc6 -> cf5c63b)
moreau pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 6faacc6 [MXNET]DepthToSpace & SpaceToDepth Operator (#5408) add cf5c63b Add option to specify flatbuffers location (#5425) No new revisions were added by this update. Summary of changes: cmake/config.cmake | 4 cmake/modules/contrib/TFLite.cmake | 7 +-- 2 files changed, 9 insertions(+), 2 deletions(-)
[GitHub] [incubator-tvm] michalpiszczek commented on pull request #5425: [TFLite] Add config option to specify FlatBuffers location
michalpiszczek commented on pull request #5425: URL: https://github.com/apache/incubator-tvm/pull/5425#issuecomment-618720491 PTAL
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen edited a comment on pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-618704483
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen edited a comment on pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-618705083 cc @junrushao1994 @ajtulloch @u99127 @yzhliu @abcdabcd987
[GitHub] [incubator-tvm] tqchen commented on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen commented on pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-618705083 cc @junrushao1994 @ajtulloch @u99127 @yzhliu
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen edited a comment on pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-618704483 Thanks @mbrookhart now that we have a concrete POC, it would be nice to have another round of API review with the folks, possibly opening another thread at the discuss forum to provide examples of what relay.dataflow_pattern can do so far and get feedback about API choices. I think the design choices that would be particularly interesting are: - The dominator pattern API examples (since that was not very well covered) - The API of match, rewrite and partition
[GitHub] [incubator-tvm] tqchen commented on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen commented on pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-618704483 Thanks @mbrookhart now that we have a concrete POC, it would be nice to have another round of API review with the folks, possibly opening another thread at the discuss forum to provide examples of what relay.dataflow_pattern can do so far and get feedback about API choices.
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
tqchen commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r414162532 ## File path: include/tvm/relay/dataflow_functor.h ## @@ -0,0 +1,248 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information Review comment: naming: perhaps it should be `dataflow_pattern_functor.h`? since `dataflow_functor` is a bit confusing ## File path: include/tvm/relay/dataflow_functor.h ## @@ -0,0 +1,248 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file tvm/relay/dataflow_matcher.h + * \brief A pattern matcher for matching dataflow properties. + */ +#ifndef TVM_RELAY_DATAFLOW_FUNCTOR_H_ +#define TVM_RELAY_DATAFLOW_FUNCTOR_H_ + +#include +#include +#include +#include +#include +#include + +namespace tvm { +namespace relay { + +/*! + * \brief A dynamical functor that dispatches on in the first DFPattern argument. + * + * \tparam FType function signiture + * This type is only defined for FType with function signature R(const DFPattern&, + * Args...) + */ +template +class DFPatternFunctor; + +// functions to be overriden. 
+#define DFPATTERN_FUNCTOR_DEFAULT \ + { return VisitDFPatternDefault_(op, std::forward(args)...); } + +#define RELAY_DFPATTERN_FUNCTOR_DISPATCH(OP) \ + vtable.template set_dispatch([](const ObjectRef& n, TSelf* self, Args... args) { \ +return self->VisitDFPattern_(static_cast(n.get()), std::forward(args)...); \ + }); + +template +class DFPatternFunctor { + private: + using TSelf = DFPatternFunctor; + using FType = tvm::NodeFunctor; + + public: + /*! \brief the result type of this functor */ + using result_type = R; + /*! \brief virtual destructor */ + virtual ~DFPatternFunctor() {} + /*! + * \brief Same as call. + * \param n The expression node. + * \param args Additional arguments. + * \return The result of the call + */ + R operator()(const DFPattern& n, Args... args) { +return VisitDFPattern(n, std::forward(args)...); + } + /*! + * \brief The functor call. + * \param n The expression node. + * \param args Additional arguments. + * \return The result of the call + */ + virtual R VisitDFPattern(const DFPattern& n, Args... args) { +CHECK(n.defined()); +static FType vtable = InitVTable(); +return vtable(n, this, std::forward(args)...); + } + // Functions that can be overriden by subclass + virtual R VisitDFPattern_(const AltPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const AttrPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const CallPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const DominatorPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const ExprPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const TupleGetItemPatternNode* op, +Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const TuplePatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const TypePatternNode* op, Args... 
args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const VarPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPattern_(const WildcardPatternNode* op, Args... args) DFPATTERN_FUNCTOR_DEFAULT; + virtual R VisitDFPatternDefault_(const Object* op, Args...) { +LOG(FATAL) << "Do not have a default for " << op->GetTypeKey(); +throw; + } + + private: + // initialize the vtable. + static FType InitVTable() { +FType vtable; +// Set dispatch +RELAY_DFPATTERN_FUNCTOR_DISPATCH(AltPatternNode); +RELAY_DFPATTERN_FUNCTOR_DISPATCH(AttrPatternNode); +RELAY_DFPATTERN_FUNCTOR_DISPATCH(CallPatternNode); +RELAY_DFPATTERN_FUNCTOR_DISPATCH(DominatorPatternNode); +RELAY_DFPATTERN_FUNCTOR_DISPATCH(ExprPatternNode); +
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5367: Improve IntervalSet's floormod
tqchen commented on a change in pull request #5367: URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414159969 ## File path: include/tvm/arith/int_set.h ## @@ -152,6 +152,22 @@ class IntSet : public ObjectRef { //--- // Integer set legacy API. // +/*! + * \brief Convert std::unordered_map to Map + * + * \param dom_map The domain map to convert. + * \return The converted map. + */ +Map ConvertDomMap(const std::unordered_map& dom_map); +// /*! +// * \brief Find an symbolic integer set that contains all possible values of +// * e given the domain of each iteration variables. +// * +// * \param e The expression to be evaluated. +// * \param dom_map The domain of each variable. +// * \return An integer set that can cover all the possible values of e. +// */ +// IntSet EvalSet(PrimExpr e, const Map& dom_map); Review comment: keep the commented region? ## File path: src/arith/int_set.cc ## @@ -311,6 +311,21 @@ inline IntervalSet Combine(Analyzer* analyzer, LOG(FATAL) << "Modular by zero in CombineInterval Mod"; } if (analyzer->CanProveGreaterEqual(divisor, 0)) { + if (const auto* ptr = b->min_value.as()) { +// a mod b = a - b * (a/b) if +// (i) a_max - a_min < b, i.e. that before mod, a's range doesn't cover [0, b) +// and (ii) a_min mod b <= a_max mod b, i.e. that a's range is still continuous after mod +auto tmax = a->max_value - b->min_value * floordiv(a->max_value, b->min_value); +tmax = analyzer->Simplify(tmax); +auto tmin = a->min_value - b->min_value * floordiv(a->min_value, b->min_value); +tmin = analyzer->Simplify(tmin); +auto tset = IntervalSet(tmin, tmax); +bool within_range = analyzer->CanProveLess(a->max_value - a->min_value, ptr->value); +bool wrap_around = analyzer->CanProve(tset->max_value < tset->min_value); +if (within_range && !wrap_around) { + return tset; Review comment: Consider moving the tset construction after the within-range check. 
## File path: src/arith/int_set.cc ## @@ -311,6 +311,21 @@ inline IntervalSet Combine(Analyzer* analyzer, LOG(FATAL) << "Modular by zero in CombineInterval Mod"; } if (analyzer->CanProveGreaterEqual(divisor, 0)) { + if (const auto* ptr = b->min_value.as()) { +// a mod b = a - b * (a/b) if +// (i) a_max - a_min < b, i.e. that before mod, a's range doesn't cover [0, b) +// and (ii) a_min mod b <= a_max mod b, i.e. that a's range is still continuous after mod +auto tmax = a->max_value - b->min_value * floordiv(a->max_value, b->min_value); +tmax = analyzer->Simplify(tmax); +auto tmin = a->min_value - b->min_value * floordiv(a->min_value, b->min_value); +tmin = analyzer->Simplify(tmin); +auto tset = IntervalSet(tmin, tmax); +bool within_range = analyzer->CanProveLess(a->max_value - a->min_value, ptr->value); +bool wrap_around = analyzer->CanProve(tset->max_value < tset->min_value); Review comment: This can be dangerous, since CanProve returns true if it can be proven, but if it returns false, it does not mean that the condition won't hold.
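The reviewer's point about CanProve being one-sided can be modeled with a toy three-valued oracle. The can_prove function below is purely an illustration, not TVM's analyzer:

```python
# Toy prover: returns True only for facts it can establish; False means
# "unknown", so `not can_prove(p)` must never be read as "p is false".
def can_prove(pred, known_facts):
    return pred in known_facts

facts = {"x >= 0"}
print(can_prove("x >= 0", facts))   # True: provable
print(can_prove("x < 0", facts))    # False: merely unknown, not disproved
print(can_prove("x >= -1", facts))  # False even though it actually follows
```

Gating a correctness-critical condition on the *negative* result of such an oracle, as the original wrap_around check did, can therefore accept cases the analyzer simply failed to analyze.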
[GitHub] [incubator-tvm] tqchen commented on issue #5384: [ARITH] Merge Impl of Extended Euclidean
tqchen commented on issue #5384: URL: https://github.com/apache/incubator-tvm/issues/5384#issuecomment-618699147 ping @yzhliu :)
[incubator-tvm] branch master updated: [MXNET]DepthToSpace & SpaceToDepth Operator (#5408)
tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 6faacc6 [MXNET]DepthToSpace & SpaceToDepth Operator (#5408) 6faacc6 is described below commit 6faacc6f9ed2f0ed59dda3216710e4df489210e4 Author: Samuel AuthorDate: Fri Apr 24 03:35:25 2020 +0530 [MXNET]DepthToSpace & SpaceToDepth Operator (#5408) --- python/tvm/relay/frontend/mxnet.py | 16 ++ tests/python/frontend/mxnet/test_forward.py | 34 + 2 files changed, 50 insertions(+) diff --git a/python/tvm/relay/frontend/mxnet.py b/python/tvm/relay/frontend/mxnet.py index 4edf0b8..be2f110 100644 --- a/python/tvm/relay/frontend/mxnet.py +++ b/python/tvm/relay/frontend/mxnet.py @@ -1073,6 +1073,20 @@ def _mx_one_hot(inputs, attrs): return _op.one_hot(indices, on_value, off_value, depth, -1, dtype) +def _mx_depth_to_space(inputs, attrs): +assert len(inputs) == 1 +new_attrs = {} +new_attrs["block_size"] = attrs.get_int("block_size") +return _op.nn.depth_to_space(*inputs, **new_attrs) + + +def _mx_space_to_depth(inputs, attrs): +assert len(inputs) == 1 +new_attrs = {} +new_attrs["block_size"] = attrs.get_int("block_size") +return _op.nn.space_to_depth(*inputs, **new_attrs) + + def _mx_contrib_fifo_buffer(inputs, attrs): new_attrs = {} new_attrs['axis'] = attrs.get_int('axis') @@ -1854,6 +1868,8 @@ _convert_map = { "make_loss" : _mx_make_loss, "_contrib_div_sqrt_dim": _mx_contrib_div_sqrt_dim, "one_hot" : _mx_one_hot, +"depth_to_space": _mx_depth_to_space, +"space_to_depth": _mx_space_to_depth, # vision "_contrib_BilinearResize2D" : _mx_resize, "_contrib_MultiBoxPrior" : _mx_multibox_prior, diff --git a/tests/python/frontend/mxnet/test_forward.py b/tests/python/frontend/mxnet/test_forward.py index 4a9848e..10edff9 100644 --- a/tests/python/frontend/mxnet/test_forward.py +++ b/tests/python/frontend/mxnet/test_forward.py @@ -995,6 
+995,38 @@ def test_forward_swap_axis(): # _verify_swap_axis((4, 5), (5, 4), 0, 0) +def test_forward_depth_to_space(): +def verify(shape, blocksize=2): +x = np.random.uniform(size=shape).astype("float32") +ref_res = mx.nd.depth_to_space(mx.nd.array(x), blocksize) +mx_sym = mx.sym.depth_to_space(mx.sym.var("x"), blocksize) +shape_dict = {"x": x.shape, } +mod, _ = relay.frontend.from_mxnet(mx_sym, shape_dict) +for target, ctx in ctx_list(): +for kind in ["graph", "debug"]: +intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) +op_res = intrp.evaluate()(x) +tvm.testing.assert_allclose(op_res.asnumpy(), ref_res.asnumpy(), rtol=1e-3, atol=1e-5) + +verify((1, 18, 3, 3), 3) + + +def test_forward_space_to_depth(): +def verify(shape, blocksize=2): +x = np.random.uniform(size=shape).astype("float32") +ref_res = mx.nd.space_to_depth(mx.nd.array(x), blocksize) +mx_sym = mx.sym.space_to_depth(mx.sym.var("x"), blocksize) +shape_dict = {"x": x.shape, } +mod, _ = relay.frontend.from_mxnet(mx_sym, shape_dict) +for target, ctx in ctx_list(): +for kind in ["graph", "debug"]: +intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) +op_res = intrp.evaluate()(x) +tvm.testing.assert_allclose(op_res.asnumpy(), ref_res.asnumpy(), rtol=1e-3, atol=1e-5) + +verify((1, 1, 9, 9), 3) + + if __name__ == '__main__': test_forward_mlp() test_forward_vgg() @@ -1047,6 +1079,8 @@ if __name__ == '__main__': test_forward_instance_norm() test_forward_layer_norm() test_forward_one_hot() +test_forward_depth_to_space() +test_forward_space_to_depth() test_forward_convolution() test_forward_deconvolution() test_forward_cond()
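For reference, the rearrangement that depth_to_space performs can be sketched in NumPy. This is a DCR-style sketch under the assumption that MXNet's NCHW layout applies; it is not TVM's implementation:

```python
import numpy as np

# Illustrative NCHW depth_to_space: channels are split into
# (block, block) groups and interleaved into the spatial dimensions,
# so (N, C, H, W) becomes (N, C/(block*block), H*block, W*block).
def depth_to_space(x, block):
    n, c, h, w = x.shape
    assert c % (block * block) == 0
    y = x.reshape(n, block, block, c // (block * block), h, w)
    y = y.transpose(0, 3, 4, 1, 5, 2)
    return y.reshape(n, c // (block * block), h * block, w * block)

x = np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2).astype("float32")
print(depth_to_space(x, 2).shape)  # (1, 1, 4, 4)
```

space_to_depth is the inverse rearrangement, which is why the two tests above are mirror images of each other.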
[GitHub] [incubator-tvm] tqchen commented on pull request #5421: [RFC] Pytest environment improvements
tqchen commented on pull request #5421:
URL: https://github.com/apache/incubator-tvm/pull/5421#issuecomment-618695859

Thanks @u99127 @tom-gall
[incubator-tvm] branch master updated: [RFC] Pytest environment improvements (#5421)
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

The following commit(s) were added to refs/heads/master by this push:
     new e149db2  [RFC] Pytest environment improvements (#5421)

e149db2 is described below

commit e149db2830db5fca86b34afbec60fca05ac4179d
Author: Ramana Radhakrishnan
AuthorDate: Thu Apr 23 23:05:03 2020 +0100

    [RFC] Pytest environment improvements (#5421)

    * [RFC] Pass pytest options globally.

      In many places having a global pytest flag is useful. For me, with the
      build and test of tvm, I would like to be able to globally pass in
      pytest options as part of development or CI flows where one would like
      to regularly measure other things that need measurement, including
      pytest coverage data that I would like to experiment with across the
      stack.

      This has been achieved with an additional setup-pytest-env.sh file in
      tests/scripts, rather than putting something in every single task test
      script, which is something I would like to avoid. This now means the
      -v option to pytest is superfluous. I did consider a pytest.ini file,
      but that doesn't allow me to pass in any old environment variable, and
      this seems to be the compromise.

    * Improve other use case documentation

    * Rationalize pytest environment.

      * Remove the setting from docker/with_same_user.
      * Take the opportunity to migrate common PYTHONPATH and TVM_PATH into
        the common environment setting.

    * Fixup vta fsim

    * Be more explicit with common PYTHONPATH

    * Fix python path for task_python_vta_fsim.sh properly

    * Fix nit in documentation.
---
 docker/bash.sh                                   |  1 +
 docker/build.sh                                  |  1 +
 docker/with_the_same_user                        |  1 +
 docs/contribute/pull_request.rst                 |  3 +++
 .../{task_python_topi.sh => setup-pytest-env.sh} | 15 ---------
 tests/scripts/task_python_docs.sh                |  1 +
 tests/scripts/task_python_frontend.sh            | 20 ++--
 tests/scripts/task_python_integration.sh         | 21 +++--
 tests/scripts/task_python_nightly.sh             |  4 ++--
 tests/scripts/task_python_topi.sh                |  5 ++---
 tests/scripts/task_python_unittest.sh            |  6 +++---
 tests/scripts/task_python_vta_fsim.sh            |  8
 tests/scripts/task_python_vta_tsim.sh            |  8
 13 files changed, 47 insertions(+), 47 deletions(-)

diff --git a/docker/bash.sh b/docker/bash.sh
index 61823f9..a6aab53 100755
--- a/docker/bash.sh
+++ b/docker/bash.sh
@@ -89,6 +89,7 @@ ${DOCKER_BINARY} run --rm --pid=host\
     -e "CI_BUILD_GROUP=$(id -g -n)" \
     -e "CI_BUILD_GID=$(id -g)" \
     -e "PYTHONPATH=python:topi/python"\
+    -e "CI_PYTEST_ADD_OPTIONS=$CI_PYTEST_ADD_OPTIONS" \
     ${CUDA_ENV}\
     ${CI_DOCKER_EXTRA_PARAMS[@]} \
     ${DOCKER_IMAGE_NAME}\
diff --git a/docker/build.sh b/docker/build.sh
index defa282..d5925dc 100755
--- a/docker/build.sh
+++ b/docker/build.sh
@@ -162,6 +162,7 @@ ${DOCKER_BINARY} run --rm --pid=host \
     -e "CI_BUILD_UID=$(id -u)" \
     -e "CI_BUILD_GROUP=$(id -g -n)" \
     -e "CI_BUILD_GID=$(id -g)" \
+    -e "CI_PYTEST_ADD_OPTIONS=$CI_PYTEST_ADD_OPTIONS" \
     ${CUDA_ENV}\
     ${CI_DOCKER_EXTRA_PARAMS[@]} \
     ${DOCKER_IMG_NAME} \
diff --git a/docker/with_the_same_user b/docker/with_the_same_user
index 1288afd..2338f63 100644
--- a/docker/with_the_same_user
+++ b/docker/with_the_same_user
@@ -41,6 +41,7 @@ getent passwd "${CI_BUILD_UID}" || adduser --gid "${CI_BUILD_GID}" --uid "${CI_B
     --gecos "${CI_BUILD_USER} (generated by with_the_same_user script)" \
     --disabled-password --home "${CI_BUILD_HOME}" --quiet "${CI_BUILD_USER}"
 usermod -a -G sudo "${CI_BUILD_USER}"
+# This is a grotesque hack to get PYTEST_ADD_OPTS available to all task scripts.
 echo "${CI_BUILD_USER} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-nopasswd-sudo
 if [[ ! -z $CUDA_VISIBLE_DEVICES ]]; then
diff --git a/docs/contribute/pull_request.rst b/docs/contribute/pull_request.rst
index 51626a1..25dc0da 100644
--- a/docs/contribute/pull_request.rst
+++ b/docs/contribute/pull_request.rst
@@ -118,3 +118,6 @@ If you want to run a single test:
     rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
     TVM_FFI=ctypes python -m pytest -v tests/python/unittest/test_pass_storage_rewrite.py
+
+    # Additionally if you want to run a single test, for example test_all_elemwise inside a file.
+    TVM_FFI=ctypes python -m pytest -v -k "test_all_elemwise" tests/python/frontend/tflite/test_forward.py
diff --git a/tests/scripts/task_python_topi.sh b/tests/scripts/setup-pytest-env.sh
similarity index 82%
copy from tests/scripts/task_python_topi.sh
copy to
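The mechanism behind this commit is simple: each task script sources a common env file so that options exported from outside (CI or a developer shell) via `CI_PYTEST_ADD_OPTIONS` reach every pytest invocation. A hedged sketch of the idea — `CI_PYTEST_ADD_OPTIONS` is the real variable from the commit, while the `PYTEST_OPTS` name and paths below are illustrative:

```shell
#!/bin/sh
# Sketch of a setup-pytest-env.sh-style file: fold externally injected
# options (hard-coded here to simulate what CI would export) into the
# single variable every task script uses when invoking pytest.
CI_PYTEST_ADD_OPTIONS="--durations=10"
PYTEST_OPTS="-v ${CI_PYTEST_ADD_OPTIONS}"

# A task script would then invoke, e.g.:
echo "python -m pytest ${PYTEST_OPTS} tests/python/unittest"
```

The payoff is that adding, say, coverage flags across the whole CI stack becomes a one-variable change instead of an edit to every task script.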
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5423: [RUNTIME][OBJECT] Introduce static slots for common objects.
junrushao1994 commented on pull request #5423:
URL: https://github.com/apache/incubator-tvm/pull/5423#issuecomment-618689683

I think it is good for now. We can discuss the additional slots for potential inheritance later.
[GitHub] [incubator-tvm] trivialfis edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618683105

We need to be careful about this. Rabit and dmlc-core are independently built; I'm not sure what will happen if they throw an error, as hiding symbols means exceptions cannot be propagated out.
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618687657

I believe that if it works it will be a net gain for XGBoost; hiding symbols is good practice for shared libraries.
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618686867

Just make sure errors are not thrown in headers.
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618686580

Before upgrading the libc dependency, we can try adding a test that forces rabit to throw an error and see if it crashes XGBoost with a segfault. An uneven allreduce would do, or a test with dmlc-core; a nonexistent-file error seems simplest.
[GitHub] [incubator-tvm] hcho3 edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618685136

Got it. How about compiling the wheel using the latest Ubuntu (not CentOS) and putting it in an S3 bucket? The TVM CI can pull from this bucket instead of PyPI. The current build environment for the Pip wheel is quite old: CentOS 6 + devtoolset-4 (GCC 5.x). When we drop CUDA 9.0, we can upgrade the build environment to CentOS 6 + devtoolset-6 (GCC 7.x). The upgrade may fix the issue.
[GitHub] [incubator-tvm] u99127 edited a comment on pull request #5394: [TFLITE]Quantize & Dequantize op
u99127 edited a comment on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720

> > > @inadob Thanks. I will give it a try.

(withdraw my comment)
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618683383

Not entirely sure in the context of static linking.
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618680692

@trivialfis Let me file a pull request. We'll include the fix as part of the upcoming 1.1.0 release.
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618679480

@hcho3 Yup. Good idea. We can make it a CMake option: https://stackoverflow.com/questions/17080869/what-is-the-cmake-equivalent-to-gcc-fvisibility-hidden-when-controlling-the-e
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618678730

@trivialfis I think we can hide all C++ symbols when building Python wheels. WDYT?
[GitHub] [incubator-tvm] hcho3 edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618678730

@trivialfis I think we can hide all C++ symbols when building Python wheels. I don't think anyone using C++ headers would use the Pip wheel. WDYT?
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618678199

It hides all the symbols except for the C APIs. So if anyone is using the C++ headers, it might generate a lot of linker errors.
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618677643

@trivialfis Can you elaborate on what your patch does? Does it hide certain symbols?
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618675737

@hcho3 Yup. It works fine on my machine too.
[GitHub] [incubator-tvm] areusch commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
areusch commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618675785

@hcho3 Tested with your new wheel on my AWS instance and the test now passes!
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618673399

@trivialfis I applied your patch and changed the CMake flags. And now the unit test does not crash any more. You should try it too. Get the wheel at https://xgboost-wheels.s3-us-west-2.amazonaws.com/xgboost-1.0.2-py3-none-manylinux1_x86_64.whl.
[GitHub] [incubator-tvm] zhiics commented on pull request #5410: [BYOC] Use Non-Recursive Visitor/Mutator
zhiics commented on pull request #5410:
URL: https://github.com/apache/incubator-tvm/pull/5410#issuecomment-618665721

Thanks @comaniac @mbrookhart @Somisary
[incubator-tvm] branch master updated: [BYOC] Use Non-Recursive Visitor/Mutator (#5410)
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

The following commit(s) were added to refs/heads/master by this push:
     new ba87604  [BYOC] Use Non-Recursive Visitor/Mutator (#5410)

ba87604 is described below

commit ba8760462bb56c3a5571bb00a2f64355d98d1b43
Author: Cody Yu
AuthorDate: Thu Apr 23 13:56:43 2020 -0700

    [BYOC] Use Non-Recursive Visitor/Mutator (#5410)

    * Non-Recursive AnnotatedTarget and MergeAnnotation

    * Non-Recursive AnnotatedRegionSet and RegionMerger
---
 src/relay/analysis/annotated_region_set.cc      | 133
 src/relay/transforms/annotate_target.cc         |  85 +++
 src/relay/transforms/merge_compiler_regions.cc  |  14 +--
 tests/python/relay/test_pass_partition_graph.py |  67 ++--
 4 files changed, 144 insertions(+), 155 deletions(-)

diff --git a/src/relay/analysis/annotated_region_set.cc b/src/relay/analysis/annotated_region_set.cc
index 94c7621..103ddcb 100644
--- a/src/relay/analysis/annotated_region_set.cc
+++ b/src/relay/analysis/annotated_region_set.cc
@@ -86,32 +86,69 @@ AnnotatedRegion AnnotatedRegionSetNode::MakeRegion(const std::string& target) {
   return *ret.first;
 }
 
-class AnnotatedRegionSet::Creator : public ExprVisitor {
+class AnnotatedRegionSet::Creator : protected MixedModeVisitor {
  public:
   Creator(const Op& region_begin_op, const Op& region_end_op)
       : begin_op_(region_begin_op), end_op_(region_end_op) {}
 
+  AnnotatedRegionSet Create(const Expr& expr) {
+    VisitExpr(expr);
+    return std::move(region_set_);
+  }
+
+  void AddToArgRegion(Expr expr, Array<Expr> args) {
+    // Merge argument regions and add itself to the region.
+
+    // Find the first open region.
+    AnnotatedRegion region;
+    for (auto arg : args) {
+      const CallNode* end = arg.as<CallNode>();
+      if (end && end->op == end_op_) {  // Ignore closed regions.
+        continue;
+      }
+
+      region = region_set_->GetRegion(arg);
+      if (region.defined()) {
+        break;
+      }
+    }
+
+    // Try to merge open regions.
+    for (auto arg : args) {
+      const CallNode* end = arg.as<CallNode>();
+      if (end && end->op == end_op_) {  // Ignore closed regions.
+        continue;
+      }
+
+      auto arg_region = region_set_->GetRegion(arg);
+      CHECK_EQ(region.defined(), arg_region.defined())
+          << "Arg regions are inconsistent: " << AsText(expr);
+      if (region.defined() && region != arg_region) {
+        region_set_->MergeRegions(arg_region, region);
+      }
+    }
+    if (region.defined()) {
+      region_set_->AddToRegion(region, expr);
+    }
+  }
+
   void VisitExpr_(const CallNode* call) {
     auto op_node = call->op.as<OpNode>();
     if (op_node == nullptr || call->attrs.as<CompilerAttrs>() == nullptr) {
-      // Propagate region to arguments
-      auto region = region_set_->GetRegion(GetRef<Call>(call));
-      if (region.defined()) {
-        for (auto arg : call->args) {
-          region_set_->AddToRegion(region, arg);
-        }
-      }
+      AddToArgRegion(GetRef<Call>(call), call->args);
     } else if (call->op == begin_op_) {
       // The annotation node is inserted on edge so it must have only one argument.
       CHECK_EQ(call->args.size(), 1U);
+      std::string target = call->attrs.as<CompilerAttrs>()->compiler;
+
+      // Check if the argument already belongs to a region
       auto region = region_set_->GetRegion(GetRef<Call>(call));
-      if (!region.defined()) {
-        throw Error(ErrorBuilder()
-                    << "Cannot find the corresponding region for start annotation:\n"
-                    << AsText(GetRef<Call>(call), false));
-      }
+      CHECK(!region.defined());
+
+      // Create a new region.
+      region = region_set_->MakeRegion(target);
+      region->nodes_.insert(GetRef<Call>(call));
       region->ins_.push_back(GetRef<Call>(call));
     } else {
       CHECK_EQ(call->op, end_op_);
@@ -122,9 +159,8 @@ class AnnotatedRegionSet::Creator : public ExprVisitor {
       // Check if the argument already belongs to a region
       auto region = region_set_->GetRegion(call->args[0]);
       if (!region.defined()) {
-        // Create a new region if the argument is not belonged to any regions yet.
-        region = region_set_->MakeRegion(target);
-        region->nodes_.insert(call->args[0]);
+        throw Error(ErrorBuilder() << "Cannot find the corresponding region for end annotation:\n"
+                                   << AsText(GetRef<Call>(call), false));
       } else {
         // If the argument is belonged to a region, it must have the same target.
         // Otherwise we should see a region_begin op.
@@ -133,83 +169,44 @@ class AnnotatedRegionSet::Creator : public ExprVisitor {
       region->nodes_.insert(GetRef<Call>(call));
       region->outs_.push_back(GetRef<Call>(call));
     }
-    ExprVisitor::VisitExpr_(call);
-  }
-
-  AnnotatedRegionSet Create(const Expr& expr)
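The point of switching from `ExprVisitor` to a `MixedModeVisitor`-style traversal is stack safety: a recursive AST walk overflows the call stack on long dataflow chains, while a walk driven by an explicit work list does not. A hedged, TVM-independent sketch of the transformation (the `Node` type and traversal are illustrative, not TVM's actual classes):

```python
class Node:
    def __init__(self, name, args=()):
        self.name = name
        self.args = list(args)

def visit_recursive(node, out):
    # Naive recursion: raises RecursionError once the chain is deeper
    # than the interpreter's recursion limit (1000 by default).
    for arg in node.args:
        visit_recursive(arg, out)
    out.append(node.name)

def visit_iterative(root):
    # Explicit stack of (node, expanded) pairs gives the same post-order
    # without growing the call stack; `done` keeps shared nodes visited once.
    out, stack, done = [], [(root, False)], set()
    while stack:
        node, expanded = stack.pop()
        if id(node) in done:
            continue
        if expanded:
            done.add(id(node))
            out.append(node.name)
        else:
            stack.append((node, True))
            for arg in reversed(node.args):  # keep left-to-right child order
                stack.append((arg, False))
    return out

# A chain deep enough to break the recursive version:
deep = Node("x0")
for i in range(1, 5000):
    deep = Node(f"x{i}", [deep])

try:
    visit_recursive(deep, [])
    recursed = True
except RecursionError:
    recursed = False

order = visit_iterative(deep)
assert not recursed                           # recursion blows the stack
assert len(order) == 5000
assert order[0] == "x0" and order[-1] == "x4999"
```

Deep chains like this arise naturally in Relay programs (e.g. long sequences of layers), which is why the BYOC passes above were rewritten this way.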
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618661450

@hcho3 Could you try applying the patches I posted above, using the corresponding CMake flags?
[GitHub] [incubator-tvm] u99127 edited a comment on pull request #5394: [TFLITE]Quantize & Dequantize op
u99127 edited a comment on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720

> > > @inadob Thanks. I will give it a try.

I have a more fundamental question: I don't expect Quantize and Dequantize to show up in TFLite models for inference, as IIUC these are operators that will appear in the training loop. This is purely curiosity.

Ramana
[GitHub] [incubator-tvm] hcho3 edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618656005

Finally, I reproduced it. Yes!

Note: I used the latest TVM as of today. The crash still occurred.
[GitHub] [incubator-tvm] jroesch commented on pull request #5144: [Relay][VM] Memory planner (part 1)
jroesch commented on pull request #5144:
URL: https://github.com/apache/incubator-tvm/pull/5144#issuecomment-618651343

@icemelon9 Yes, my intention is that in theory we can compute everything dynamically, like malloc or pool allocators.
[GitHub] [incubator-tvm] hcho3 edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618649425

@areusch Can you try out the latest TVM master on your end? I'm still having trouble reproducing the original issue.
[GitHub] [incubator-tvm] trivialfis edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618640842

@hcho3 One last request, otherwise I'm running out of ideas. Patch both xgboost's and rabit's C API macros.

For xgboost:

```patch
diff --git a/include/xgboost/c_api.h b/include/xgboost/c_api.h
index f9c0a0ff..baaaeb43 100644
--- a/include/xgboost/c_api.h
+++ b/include/xgboost/c_api.h
@@ -20,7 +20,7 @@
 #if defined(_MSC_VER) || defined(_WIN32)
 #define XGB_DLL XGB_EXTERN_C __declspec(dllexport)
 #else
-#define XGB_DLL XGB_EXTERN_C
+#define XGB_DLL XGB_EXTERN_C __attribute__ ((visibility ("default")))
 #endif  // defined(_MSC_VER) || defined(_WIN32)
 // manually define unsigned long
```

For rabit:

```patch
diff --git a/include/rabit/c_api.h b/include/rabit/c_api.h
index 0a96ef7..47c5735 100644
--- a/include/rabit/c_api.h
+++ b/include/rabit/c_api.h
@@ -18,7 +18,7 @@
 #if defined(_MSC_VER) || defined(_WIN32)
 #define RABIT_DLL RABIT_EXTERN_C __declspec(dllexport)
 #else
-#define RABIT_DLL RABIT_EXTERN_C
+#define RABIT_DLL RABIT_EXTERN_C __attribute__ ((visibility ("default")))
 #endif  // defined(_MSC_VER) || defined(_WIN32)
 /*! \brief rabit unsigned long type */
```

Build XGBoost with the following flags appended:

```
-DCMAKE_CXX_FLAGS='-fvisibility=hidden' -DCMAKE_C_FLAGS='-fvisibility=hidden'
```
[GitHub] [incubator-tvm] trivialfis edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618645218

@hcho3 Yes. Currently detached at the commit before the above-linked PR.

```
fis@fis-Standard-PC-Q35-ICH9-2009:~/Workspace/XGBoost/incubator-tvm$ git status
HEAD detached at 56941fb9d
Untracked files:
  (use "git add <file>..." to include in what will be committed)
	.gdb_history

nothing added to commit but untracked files present (use "git add" to track)
```
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618644587

@trivialfis And you ran `git submodule update --init --recursive`?
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618642163 @hcho3 I tried master branch and the commit before: > I wonder if that was due to inconsistency between dmlc-core of tvm and xgb. #5401 updated the logging to latest, please check again.
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618641439 @trivialfis Did you update TVM to latest?
[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
trivialfis commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618640842 @hcho3 One last request, otherwise I'm running out of ideas. Patch both xgboost's and rabit's C API macros. For xgboost:

```patch
diff --git a/include/xgboost/c_api.h b/include/xgboost/c_api.h
index f9c0a0ff..baaaeb43 100644
--- a/include/xgboost/c_api.h
+++ b/include/xgboost/c_api.h
@@ -20,7 +20,7 @@
 #if defined(_MSC_VER) || defined(_WIN32)
 #define XGB_DLL XGB_EXTERN_C __declspec(dllexport)
 #else
-#define XGB_DLL XGB_EXTERN_C
+#define XGB_DLL XGB_EXTERN_C __attribute__ ((visibility ("default")))
 #endif  // defined(_MSC_VER) || defined(_WIN32)
 // manually define unsigned long
```

For rabit:

```patch
diff --git a/include/rabit/c_api.h b/include/rabit/c_api.h
index 0a96ef7..47c5735 100644
--- a/include/rabit/c_api.h
+++ b/include/rabit/c_api.h
@@ -18,7 +18,7 @@
 #if defined(_MSC_VER) || defined(_WIN32)
 #define RABIT_DLL RABIT_EXTERN_C __declspec(dllexport)
 #else
-#define RABIT_DLL RABIT_EXTERN_C
+#define RABIT_DLL RABIT_EXTERN_C __attribute__ ((visibility ("default")))
 #endif  // defined(_MSC_VER) || defined(_WIN32)
 /*! \brief rabit unsigned long type */
```

Then build XGBoost with the following flags appended:

```
-DCMAKE_CXX_FLAGS='-fvisibility=hidden' -DCMAKE_C_FLAGS='-fvisibility=hidden'
```
[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5423: [RUNTIME][OBJECT] Introduce static slots for common objects.
tqchen edited a comment on pull request #5423: URL: https://github.com/apache/incubator-tvm/pull/5423#issuecomment-618627418 The inheritance will still work for 3rd-party code: because overflow is enabled in all of these cases, the support path will behave the same as before this PR. We could discuss whether or not we want to reserve some slots once we have a good idea of potential additional inheritance.
[incubator-tvm] branch master updated: [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 1f6c498 [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416) 1f6c498 is described below

commit 1f6c498bcb37ae7106464075f62aecfbb9d681e4
Author: Tianqi Chen
AuthorDate: Thu Apr 23 12:40:11 2020 -0700

    [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)

    * [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings
    * Add note block
---
 docs/api/python/runtime.rst           |  25 ---
 docs/deploy/android.md                |  39 ---
 docs/deploy/android.rst               |  42 +++
 docs/deploy/cpp_deploy.md             |  52 ---
 docs/deploy/cpp_deploy.rst            |  56 +++
 docs/deploy/integrate.md              |  67 ---
 docs/deploy/integrate.rst             |  69 +++
 docs/install/nnpack.md                | 100 ---
 docs/install/nnpack.rst               | 118 ++
 tests/scripts/task_sphinx_precheck.sh |   2 +-
 10 files changed, 286 insertions(+), 284 deletions(-)

[Full patch text truncated in the archive.]
[incubator-tvm] branch master updated (d81a4fa -> 1f6c498)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from d81a4fa [CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0 (#5392) add 1f6c498 [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416) No new revisions were added by this update.

Summary of changes:
 docs/api/python/runtime.rst           |  25 ---
 docs/deploy/android.md                |  39 ---
 docs/deploy/android.rst               |  42 +++
 docs/deploy/cpp_deploy.md             |  52 ---
 docs/deploy/cpp_deploy.rst            |  56 +++
 docs/deploy/integrate.md              |  67 ---
 docs/deploy/integrate.rst             |  69 +++
 docs/install/nnpack.md                | 100 ---
 docs/install/nnpack.rst               | 118 ++
 tests/scripts/task_sphinx_precheck.sh |   2 +-
 10 files changed, 286 insertions(+), 284 deletions(-)
 delete mode 100644 docs/deploy/android.md
 create mode 100644 docs/deploy/android.rst
 delete mode 100644 docs/deploy/cpp_deploy.md
 create mode 100644 docs/deploy/cpp_deploy.rst
 delete mode 100644 docs/deploy/integrate.md
 create mode 100644 docs/deploy/integrate.rst
 delete mode 100644 docs/install/nnpack.md
 create mode 100644 docs/install/nnpack.rst
[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5425: [TFLite] Add config option to specify FlatBuffers location
tmoreau89 commented on pull request #5425: URL: https://github.com/apache/incubator-tvm/pull/5425#issuecomment-618604203 @ZihengJiang @tqchen
[GitHub] [incubator-tvm] tqchen commented on pull request #5392: [CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0
tqchen commented on pull request #5392: URL: https://github.com/apache/incubator-tvm/pull/5392#issuecomment-618604193 This PR is merged. Will update the thread when the CI binaries get updated.
[incubator-tvm] branch master updated (9c12ec8 -> d81a4fa)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 9c12ec8 [cuDNN] Add cuDNN grouped convolutions support (#5319) add d81a4fa [CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0 (#5392) No new revisions were added by this update.

Summary of changes:
 docker/install/ubuntu_install_tensorflow.sh      |  2 +-
 docker/install/ubuntu_install_tflite.sh          |  6 +++---
 tests/python/frontend/tensorflow/test_forward.py |  9 +
 tests/python/frontend/tflite/test_forward.py     | 18 ++
 4 files changed, 27 insertions(+), 8 deletions(-)
[GitHub] [incubator-tvm] michalpiszczek commented on pull request #5425: [TFLite] Add config option to specify FlatBuffers location
michalpiszczek commented on pull request #5425: URL: https://github.com/apache/incubator-tvm/pull/5425#issuecomment-618603153 @tmoreau89
[GitHub] [incubator-tvm] michalpiszczek opened a new pull request #5425: [TFLite] Add config option to specify FlatBuffers location
michalpiszczek opened a new pull request #5425: URL: https://github.com/apache/incubator-tvm/pull/5425 Adds an option to `config.cmake` for specifying the location of FlatBuffers when building with TFLite ON. This option _must_ be set when building full TVM with TFLite ON, but is not required to build just the runtime with TFLite ON.
[GitHub] [incubator-tvm] areusch commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
areusch commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618595019 @hcho3 do you still need an AMI from me? I think you can repro by using the AMI I mentioned [earlier](https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617491801) ami: `099720109477/ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20200408` it should be enough to just try and run the tvm test using a pip installed xgboost. I can build you another if it would help.
[GitHub] [incubator-tvm] wpan11nv edited a comment on pull request #5424: [CodeGen] Cleanup generated code
wpan11nv edited a comment on pull request #5424: URL: https://github.com/apache/incubator-tvm/pull/5424#issuecomment-618591146 Notably, extra white spaces and scopes are removed. When the loop nesting or expression is deep, the emitted code becomes unreadable. Before:

```c++
extern "C" __global__ void test_kernel0(void* __restrict__ B, void* __restrict__ A) {
  float4 _1;
  {
    float4 _2 = (( float4*)(( float*)A + (((int)blockIdx.x) * 4)))[0];
    float4 _3 = make_float4(1.00e+00f, 1.00e+00f, 1.00e+00f, 1.00e+00f);
    _1.x = (_2.x+_3.x);
    _1.y = (_2.y+_3.y);
    _1.z = (_2.z+_3.z);
    _1.w = (_2.w+_3.w);
  }
  (( float4*)(( float*)B + (((int)blockIdx.x) * 4)))[0] = _1;
}
```

After:

```c++
extern "C" __global__ void test_kernel0(void* __restrict__ B, void* __restrict__ A) {
  float4 _1;
  float4 _2 = ((float4*)((float*)A + (((int)blockIdx.x) * 4)))[0];
  float4 _3 = make_float4(1.00e+00f, 1.00e+00f, 1.00e+00f, 1.00e+00f);
  _1.x = (_2.x+_3.x);
  _1.y = (_2.y+_3.y);
  _1.z = (_2.z+_3.z);
  _1.w = (_2.w+_3.w);
  ((float4*)((float*)B + (((int)blockIdx.x) * 4)))[0] = _1;
}
```
[GitHub] [incubator-tvm] icemelon9 commented on pull request #5319: [cuDNN] Add cuDNN grouped convolution support
icemelon9 commented on pull request #5319: URL: https://github.com/apache/incubator-tvm/pull/5319#issuecomment-618594262 Thanks @wpan11nv. This is now merged.
[incubator-tvm] branch master updated (a3b1397 -> 9c12ec8)
This is an automated email from the ASF dual-hosted git repository. haichen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from a3b1397 [Frontend] Asymmetric padding of convolution support (#4803) add 9c12ec8 [cuDNN] Add cuDNN grouped convolutions support (#5319) No new revisions were added by this update.

Summary of changes:
 python/tvm/contrib/cudnn.py                     |  36 +---
 python/tvm/relay/op/strategy/cuda.py            |  20 -
 src/runtime/contrib/cudnn/conv_forward.cc       |  37 +---
 src/runtime/contrib/cudnn/cudnn_utils.h         |   1 -
 tests/python/contrib/test_cudnn.py              | 112 +---
 topi/python/topi/cuda/conv2d.py                 |   9 +-
 topi/python/topi/testing/conv2d_nhwc_python.py  |  37 +++-
 topi/python/topi/testing/conv3d_ncdhw_python.py |   1 +
 topi/tests/python/test_topi_conv2d_nchw.py      |   2 +-
 9 files changed, 170 insertions(+), 85 deletions(-)
[GitHub] [incubator-tvm] u99127 commented on a change in pull request #5421: [RFC] Pytest environment improvements
u99127 commented on a change in pull request #5421: URL: https://github.com/apache/incubator-tvm/pull/5421#discussion_r414036151 ## File path: docs/contribute/pull_request.rst ## @@ -118,3 +118,6 @@ If you want to run a single test: rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc TVM_FFI=ctypes python -m pytest -v tests/python/unittest/test_pass_storage_rewrite.py + + #Additionally if you want to run a single test, for example test_all_elemwise inside a file. Review comment: Ah, thanks - fixed.
[GitHub] [incubator-tvm] u99127 commented on pull request #5392: [CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0
u99127 commented on pull request #5392: URL: https://github.com/apache/incubator-tvm/pull/5392#issuecomment-618582877 Gentle ping.
[GitHub] [incubator-tvm] wpan11nv opened a new pull request #5424: [CodeGen] Cleanup generated code
wpan11nv opened a new pull request #5424: URL: https://github.com/apache/incubator-tvm/pull/5424 - remove unnecessary white spaces from storage kind - do not start a new scope for vectorization, as temporary variables are all uniquely generated. The above two changes make vectorized code much cleaner. Signed-off-by: Wei Pan Thanks for contributing to TVM! Please refer to guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
[GitHub] [incubator-tvm] yongfeng-nv commented on a change in pull request #5367: Improve IntervalSet's floormod
yongfeng-nv commented on a change in pull request #5367: URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414034656 ## File path: src/arith/const_int_bound.cc ## @@ -150,10 +150,12 @@ class ConstIntBoundAnalyzer::Impl : const PrimExprNode* op = expr.as(); auto val = bound_->find(op); if (val != bound_->end()) { -CHECK(val->second->min_value == res.min_value && - val->second->max_value == res.max_value) - << "Detected bound for " << expr - << "conflicts with memorization"; +auto everything = Everything(op->dtype); +CHECK( +(val->second->min_value == res.min_value && val->second->max_value == res.max_value) || +(val->second->min_value == everything.min_value && Review comment: @hzfan do you mean to update val->second when res is a subset? That is a looser check than the current change. I haven't seen any case where a bound improves from a looser one. If we don't have any such case now, I'd like to let this check assert, so that anyone changing the behavior has to justify it.
[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass
lhutton1 commented on a change in pull request #5422: URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r414025113 ## File path: include/tvm/relay/op_attr_types.h ## @@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc< * \param tinfos An array of placeholders, use for getting the inferred shape * and dtype of the inputs. * \param desired_layout The desired layout. + * \param additional_layouts Specify additional layouts, e.g. kernel_layout. * \return new_expr The modified expression. */ using FTVMConvertOpLayout = runtime::TypedPackedFunc< Expr(const Attrs& attrs, const Array& args, const Array& tinfos, - const std::string& desired_layout)>; + const std::string& desired_layout, Review comment: Sounds good to me :)
[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py
hcho3 commented on issue #4953: URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-618560244 @trivialfis Here it is: https://drive.google.com/file/d/13WZRRaUPKil4rwH2avgUix_xO5LpIYs5/view?usp=sharing
[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass
anijain2305 commented on a change in pull request #5422: URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r414011046 ## File path: include/tvm/relay/op_attr_types.h ## @@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc< * \param tinfos An array of placeholders, use for getting the inferred shape * and dtype of the inputs. * \param desired_layout The desired layout. + * \param additional_layouts Specify additional layouts, e.g. kernel_layout. * \return new_expr The modified expression. */ using FTVMConvertOpLayout = runtime::TypedPackedFunc< Expr(const Attrs& attrs, const Array& args, const Array& tinfos, - const std::string& desired_layout)>; + const std::string& desired_layout, Review comment: You are correct in understanding my proposal. That's interesting, although we don't have any operator with three different layouts. For this, maybe we can use a string "default" (or a better name) that leaves the layout for the tensor at that index unchanged. We can then be stricter when calling the ConvertLayout pass: the user must define the layouts for all the tensors, and if they do not care about a certain tensor's layout, they must still say "default".
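The "default" semantics proposed above can be sketched independently of TVM (a minimal illustration of the idea, not the pass's actual implementation; the function name is hypothetical): each operator maps to a list of desired layouts, one per input tensor, and "default" means keep whatever layout that tensor already has.

```python
def resolve_layouts(desired, current):
    """For each tensor, pick the user-requested layout, falling back to
    the tensor's current layout wherever the user wrote "default"."""
    return [cur if want == "default" else want
            for want, cur in zip(desired, current)]

# A conv2d with (data, kernel) tensors: convert data to NCHW,
# leave the kernel layout untouched.
print(resolve_layouts(["NCHW", "default"], ["NHWC", "HWIO"]))  # → ['NCHW', 'HWIO']
```

This also answers the three-tensor concern from the review thread: a hypothetical op with (data, kernel, some_other) could be given `["NHWC", "default", "NHWC"]`.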
[GitHub] [incubator-tvm] junrushao1994 commented on issue #5423: [RUNTIME][OBJECT] Introduce static slots for common objects.
junrushao1994 commented on issue #5423: URL: https://github.com/apache/incubator-tvm/pull/5423#issuecomment-618551683 I am a bit concerned about the current strategy of allocating the exact number of children slots: a third-party library may inherit some of those objects, which makes their #children increase. Shall we allocate a bit more, like aligning to 2^k, to make third-party inheritance less troublesome?
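The trade-off being discussed can be sketched as a toy index allocator (an illustration of the general static-slot scheme, not TVM's actual code): a parent type reserves a fixed number of child slots right after its own index, and children beyond that count overflow into indices handed out by a shared counter.

```python
class TypeIndexAllocator:
    """Toy static-slot allocator: a parent reserves `slots` indices
    immediately after its own; extra children spill into a shared
    overflow region at the tail."""

    def __init__(self, overflow_start):
        self.next_overflow = overflow_start
        self.used = {}  # parent index -> number of children allocated

    def alloc_child(self, parent, slots):
        n = self.used.get(parent, 0)
        self.used[parent] = n + 1
        if n < slots:
            return parent + 1 + n      # fits in the reserved static slots
        idx = self.next_overflow       # overflow: slower support path
        self.next_overflow += 1
        return idx

alloc = TypeIndexAllocator(overflow_start=100)
# A parent type at index 10 reserved 2 slots; the third child overflows.
print([alloc.alloc_child(10, 2) for _ in range(3)])  # → [11, 12, 100]
```

Rounding `slots` up to a power of two, as suggested in the comment, simply leaves more headroom before third-party children start taking the overflow path.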
[GitHub] [incubator-tvm] hzfan commented on a change in pull request #5367: Improve IntervalSet's floormod
hzfan commented on a change in pull request #5367: URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414004382 ## File path: src/arith/const_int_bound.cc ## @@ -150,10 +150,12 @@ class ConstIntBoundAnalyzer::Impl : const PrimExprNode* op = expr.as(); auto val = bound_->find(op); if (val != bound_->end()) { -CHECK(val->second->min_value == res.min_value && - val->second->max_value == res.max_value) - << "Detected bound for " << expr - << "conflicts with memorization"; +auto everything = Everything(op->dtype); +CHECK( +(val->second->min_value == res.min_value && val->second->max_value == res.max_value) || +(val->second->min_value == everything.min_value && Review comment: I agree that override with partial order is good. Can we check if `res` is contained in `val->second`? (`val->second` being everything is a special case of this)
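The containment check suggested here is straightforward on (min, max) pairs; a hedged sketch of the semantics (not the actual ConstIntBound code):

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def contains(outer, inner):
    """True when interval `inner` lies entirely within `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

everything = (NEG_INF, POS_INF)  # the unrestricted bound for a dtype
res = (0, 255)                   # a newly computed, tighter bound

# The memoized bound could be checked for containment rather than
# strict equality; "everything" contains any bound as a special case.
print(contains(everything, res), contains(res, everything))  # → True False
```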
[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass
lhutton1 commented on a change in pull request #5422: URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r413999028 ## File path: include/tvm/relay/op_attr_types.h ## @@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc< * \param tinfos An array of placeholders, use for getting the inferred shape * and dtype of the inputs. * \param desired_layout The desired layout. + * \param additional_layouts Specify additional layouts, e.g. kernel_layout. * \return new_expr The modified expression. */ using FTVMConvertOpLayout = runtime::TypedPackedFunc< Expr(const Attrs& attrs, const Array& args, const Array& tinfos, - const std::string& desired_layout)>; + const std::string& desired_layout, Review comment: Thanks, I hope I understood correctly. So the idea is that someone using this pass would specify a map of operator -> [layouts], e.g. `relay.transform.ConvertLayout({"nn.conv2d": ["NCHW", "IOHW"], ...})`? My intention with the additional_layouts approach was to minimise the impact on the current state of the pass, although I like your idea. My only concern with using a list would be an operator that had, say, 3 different input tensors with different layouts (I don't think one exists currently; could this pop up in the future?). For example data, kernel, some_other. How could we specify a preferred layout for data and some_other whilst leaving kernel set to default?
[GitHub] [incubator-tvm] comaniac edited a comment on issue #5409: [BYOC] Don't annotate constants
comaniac edited a comment on issue #5409: URL: https://github.com/apache/incubator-tvm/pull/5409#issuecomment-618537434

Ah, I see your point. I had missed that `ConstantNode` is also a "node" rather than a "var". I agree with you that constant nodes should follow the target of the consuming node, and the output you illustrated makes sense to me. On the other hand, in my opinion it would be better to maintain the "annotate every node" invariant of the `AnnotateTarget` pass. A similar case is `TupleNode`: we determine the target of a tuple node by looking at its arguments, and still annotate it with whatever target it should have, whether TVM or another target. In summary, I'd prefer the following annotation generated by `AnnotateTarget`:

```
input
  |
begin
  |
 op -- begin -- end -- const -- begin
  |
 end
```

Although, as you pointed out, a constant node is not on the dataflow path, I still prefer to keep the whole process within one pass. This not only naturally guarantees that this change will not affect the rest of the BYOC flow, but also makes future maintenance easier.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5421: [RFC] Pytest environment improvements
tqchen commented on a change in pull request #5421: URL: https://github.com/apache/incubator-tvm/pull/5421#discussion_r413991965

## File path: docs/contribute/pull_request.rst ##

```
@@ -118,3 +118,6 @@ If you want to run a single test:

     rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
     TVM_FFI=ctypes python -m pytest -v tests/python/unittest/test_pass_storage_rewrite.py
+
+    #Additionally if you want to run a single test, for example test_all_elemwise inside a file.
```

Review comment: ` # Additionally if`
[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5421: [RFC] Pytest environment improvements
tqchen commented on a change in pull request #5421: URL: https://github.com/apache/incubator-tvm/pull/5421#discussion_r413990735

## File path: docs/contribute/pull_request.rst ##

```
@@ -118,3 +118,6 @@ If you want to run a single test:

     rm -rf python/tvm/*.pyc python/tvm/*/*.pyc python/tvm/*/*/*.pyc
     TVM_FFI=ctypes python -m pytest -v tests/python/unittest/test_pass_storage_rewrite.py
+
+    #Additionally if you want to run a single test, for example test_all_elemwise inside a file.
```

Review comment: nit: space between `#` and `Additionally`
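The doc addition under review stops short of showing the actual command. A likely sketch uses pytest's `::` node-id selector; the throwaway file below is purely for demonstration (with TVM you would point at a real file such as `tests/python/unittest/test_pass_storage_rewrite.py` and one of its test functions).

```shell
# Demonstrate pytest's "file.py::test_name" selector on a throwaway file.
# The file path and test names here are made up for the demo.
cat > /tmp/test_single_demo.py <<'EOF'
def test_all_elemwise():
    assert 1 + 1 == 2

def test_other():
    assert True
EOF

# Runs only test_all_elemwise, skipping test_other.
python -m pytest -q /tmp/test_single_demo.py::test_all_elemwise
```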
[GitHub] [incubator-tvm] comaniac commented on issue #5409: [BYOC] Don't annotate constants
comaniac commented on issue #5409: URL: https://github.com/apache/incubator-tvm/pull/5409#issuecomment-618537434

Ah, I see your point. I had missed that `ConstantNode` is also a "node" rather than a "var". I agree with you that constant nodes should follow the target of the consuming node, and the output you illustrated makes sense to me. On the other hand, it would be better to maintain the "annotate every node" invariant of the `AnnotateTarget` pass. A similar case is `TupleNode`: we determine the target of a tuple node by looking at its arguments, and still annotate it with whatever target it should have, whether TVM or another target. In summary, I'd prefer the following annotation generated by `AnnotateTarget`:

```
input
  |
begin
  |
 op -- begin -- end -- const -- begin
  |
 end
```

instead of

```
input
  |
begin
  |
 op -- const
  |
 end
```

This naturally guarantees that this change will not affect the rest of the BYOC flow.
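The policy agreed in this thread can be sketched as a toy rule (names assumed; this is not TVM's `AnnotateTarget` implementation): a constant carries no target of its own, so it inherits the target of its consuming node, just as a tuple's target is derived from its arguments.

```python
# Toy node model: dicts with a "kind" and a "target" annotation.
def annotate(node, consumer_target=None):
    """Attach a target annotation; constants inherit the consumer's target."""
    if node["kind"] == "const":
        # A constant is not on the dataflow path, so it takes the target
        # of the node that consumes it (wrapped in its own begin marker
        # in the full pass).
        node["target"] = consumer_target
    return node

op = annotate({"kind": "op", "op": "nn.conv2d", "target": "dnnl"})
const = annotate({"kind": "const"}, consumer_target=op["target"])

assert const["target"] == "dnnl"   # the const follows its consumer's target
```

Because every node ends up annotated, downstream BYOC passes (merging, partitioning) need no special case for constants.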