[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5578: [Relay][Refactor][std::string --> String] Relay updated with String
zhiics commented on a change in pull request #5578: URL: https://github.com/apache/incubator-tvm/pull/5578#discussion_r424893721 ## File path: src/relay/backend/compile_engine.cc ## @@ -580,7 +580,7 @@ class CompileEngineImpl : public CompileEngineNode { auto symbol_name = src_func->GetAttr(tvm::attr::kGlobalSymbol); CHECK(symbol_name.defined()) << "No external symbol is set for:\n" << AsText(src_func, false); -auto gv = GlobalVar(std::string(symbol_name.value())); +auto gv = GlobalVar(String(symbol_name.value())); Review comment: just `GlobalVar(symbol_name)`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5578: [Relay][Refactor][std::string --> String] Relay updated with String
ANSHUMAN87 commented on pull request #5578: URL: https://github.com/apache/incubator-tvm/pull/5578#issuecomment-628401088 Gentle Ping @jroesch , @tqchen , @zhiics , Thanks!
[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #5588: [Frontend][Tensorflow] Gather nd bug fix for one dim support in tensorflow
ANSHUMAN87 commented on a change in pull request #5588: URL: https://github.com/apache/incubator-tvm/pull/5588#discussion_r424883702 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -1378,9 +1378,11 @@ def _gather_nd(): def _impl(inputs, attr, params, mod): indices_dims = len(_infer_shape(inputs[1], mod)) indices = _op.transpose(inputs[1], axes=[-1] + list(range(indices_dims-1))) +attr_new = {} +attr_new['one_dim_support'] = True Review comment: @kevinthesun : Sorry for the late reply! The difficulty here is that gather_nd behaves differently in MXNet and TensorFlow: MXNet supports only indices with dim >= 2, while TensorFlow supports dim >= 1. Checking the input dims alone therefore cannot make the op work in both cases, which is why I added an extra attribute to control it. Alternatively, we could add a minimum 2-dim check in the MXNet frontend and relax the check in the op code to a minimum of 1 dim; that would also cover both. We can discuss which approach is better. Please let me know your opinion, thanks!
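The behavioral difference discussed above can be sketched with NumPy. This is a hypothetical illustration of gather_nd semantics (not TVM's frontend code): each vector along the last axis of `indices` is a coordinate tuple into `params`, TensorFlow accepts rank-1 indices, and an MXNet-style frontend would require rank >= 2, so the minimum rank is modeled as a parameter here.

```python
import numpy as np

def gather_nd(params, indices, min_indices_rank=1):
    """Illustrative gather_nd: each vector along the last axis of `indices`
    selects one element (or slice) of `params`. TensorFlow allows rank-1
    indices; an MXNet-style frontend could pass min_indices_rank=2.
    (Hypothetical helper for illustration only.)"""
    params = np.asarray(params)
    indices = np.asarray(indices)
    assert indices.ndim >= min_indices_rank, "indices rank below frontend minimum"
    if indices.ndim == 1:
        # The rank-1 case TensorFlow supports: a single coordinate tuple.
        return params[tuple(indices)]
    flat = indices.reshape(-1, indices.shape[-1])
    gathered = np.array([params[tuple(idx)] for idx in flat])
    out_shape = indices.shape[:-1] + params.shape[indices.shape[-1]:]
    return gathered.reshape(out_shape)

params = np.arange(6).reshape(2, 3)
print(gather_nd(params, [1, 2]))            # rank-1 indices, TensorFlow-style -> 5
print(gather_nd(params, [[0, 0], [1, 1]]))  # rank-2 indices, fine for both -> [0 4]
```

With `min_indices_rank=2`, the first call would fail the assertion, which mirrors the alternative the comment proposes: enforce the stricter minimum in the MXNet frontend and let the op itself accept rank 1.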
[GitHub] [incubator-tvm] kevinthesun commented on pull request #5467: [Relay]Improve Shape Func handling for Tuple inputs
kevinthesun commented on pull request #5467: URL: https://github.com/apache/incubator-tvm/pull/5467#issuecomment-628383515 ping @jroesch
[GitHub] [incubator-tvm] roastduck commented on a change in pull request #5551: [Reduction] Fix cross thread reduction
roastduck commented on a change in pull request #5551: URL: https://github.com/apache/incubator-tvm/pull/5551#discussion_r424865522 ## File path: src/te/operation/cross_thread_reduction.cc ## @@ -48,9 +97,18 @@ Stmt MakeCrossThreadReduction(const ComputeOpNode* self, const Stage& stage, CHECK(reduce); reduces[i] = reduce; } - PrimExpr cond = reduces[0]->condition; - for (PrimExpr v : conds) { -cond = cond && v; + + // This computes the bound checking predicates in normal reduction. + auto normal_preds = + MakeBoundCheck(stage, dom_map, value_map, false, std::unordered_set()); + + // The existing reduction predicate (only from the first one one?) + PrimExpr input_pred = reduces[0]->condition; + + // normal_pred = input_pred && normal_pred + normal_preds.push_back(input_pred); + for (PrimExpr v : normal_preds) { +if (v.defined()) normal_preds.push_back(v); } Review comment: What does this loop do? Iterating through `normal_preds` and pushing into itself?
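The loop being questioned appends to `normal_preds` while iterating over it, whereas the surrounding comments suggest the intent was to fold all defined predicates into a single condition. A small hedged Python sketch of that intended fold (predicates modeled as strings, with `None` standing in for an undefined `PrimExpr`; this is an illustration, not TVM code):

```python
def combine_predicates(normal_preds, input_pred):
    """AND together the bound-check predicates and the reduction's own
    condition, skipping undefined entries. The quoted diff instead
    appends to normal_preds while iterating it, which duplicates
    entries rather than folding them into one condition."""
    defined = [p for p in normal_preds + [input_pred] if p is not None]
    if not defined:
        return None
    cond = defined[0]
    for p in defined[1:]:
        cond = f"({cond} && {p})"
    return cond

print(combine_predicates(["i < n", None], "cond0"))  # -> (i < n && cond0)
```

In the C++ original, mutating a `std::vector` inside a range-for over the same vector can also invalidate the iterators, so the fix is more than cosmetic.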
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424847028 ## File path: include/tvm/relay/dataflow_matcher.h ## @@ -0,0 +1,67 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file tvm/relay/dataflow_matcher.h + * \brief A pattern matcher for matching dataflow properties. + */ +#ifndef TVM_RELAY_DATAFLOW_MATCHER_H_ +#define TVM_RELAY_DATAFLOW_MATCHER_H_ +
#include +#include + +#include +#include + +namespace tvm { +namespace relay { + +class DFPatternCallback; Review comment: I think you can remove this forward decl.
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424846955 ## File path: include/tvm/relay/dataflow_matcher.h ## @@ -0,0 +1,67 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file tvm/relay/dataflow_matcher.h + * \brief A pattern matcher for matching dataflow properties. + */ +#ifndef TVM_RELAY_DATAFLOW_MATCHER_H_ +#define TVM_RELAY_DATAFLOW_MATCHER_H_ + +#include +#include + +#include +#include + +namespace tvm { +namespace relay { + +class DFPatternCallback; +/*! + * \brief Base type of all dataflow pattern callbacks. + * \sa DFPatternCallback + */ +class DFPatternCallbackNode : public Object { + public: + /*! \brief Pattern this callback matches */ + DFPattern pattern_; + /*! \brief Function to call when finding a matched expression */ + PackedFunc function_; + + void VisitAttrs(tvm::AttrVisitor* v) {} + + static constexpr const char* _type_key = "DFPatternCallbackNode"; + TVM_DECLARE_BASE_OBJECT_INFO(DFPatternCallbackNode, Object); +}; + +/*! + * \brief Managed reference to dataflow pattern callbacks. 
+ * \sa DFPatternCallbackNode + */ +class DFPatternCallback : public ObjectRef { Review comment: Since this header is fairly small and used only by dataflow_matcher.cc, how about moving the content to dataflow_matcher.cc and removing this header?
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424844266 ## File path: include/tvm/relay/dataflow_pattern_functor.h ## @@ -0,0 +1,146 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file tvm/relay/dataflow_matcher.h Review comment: dataflow_pattern_functor.h
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424840667 ## File path: src/relay/ir/dataflow_matcher.cc ## @@ -0,0 +1,656 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file src/tvm/relay/dataflow_matcher.cc + * \brief The dataflow pattern matcher for Relay. 
+ */ + +#include +#include +#include +#include + +#include + +#include "indexed_graph.h" + +namespace tvm { +namespace relay { + +// Pattern Matcher + +class DominatorMatcher; + +class DFPatternMatcher : public DFPatternFunctor { + public: + explicit DFPatternMatcher(const Expr& root_expr) : expr_graph_(CreateIndexedGraph(root_expr)) {} + bool Match(const DFPattern& pattern, const Expr& expr); + Map> GetMemo() { return Map>(memo_); } + + protected: + bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override; + bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) override; + + void ClearMap(size_t watermark); + bool MatchesPath(const DominatorPatternNode* op, const Expr& expr); + bool DominatesParent(const DominatorPatternNode* op, const Expr& expr); + + std::unordered_map, ObjectHash, ObjectEqual> memo_; + std::vector matched_nodes_; + IndexedGraph expr_graph_; + IndexedGraph pattern_graph_; + bool memoize_ = true; +}; + +bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) { + memo_.clear(); + matched_nodes_.clear(); + return VisitDFPattern(pattern, expr); +} + +void DFPatternMatcher::ClearMap(size_t watermark) { + for (size_t i = watermark; i < matched_nodes_.size(); ++i) { +memo_.erase(matched_nodes_[i]); + } + 
matched_nodes_.erase(matched_nodes_.begin() + watermark, matched_nodes_.end()); +} + +bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& expr) { + if (memoize_ && memo_.count(pattern)) { +CHECK_EQ(memo_[pattern].size(), 1); +return expr.same_as(memo_[pattern][0]); + } else { +auto watermark = matched_nodes_.size(); +auto out = DFPatternFunctor::VisitDFPattern(pattern, expr); +if (out) { + memo_[pattern].push_back(expr); + matched_nodes_.push_back(pattern); +} else { + ClearMap(watermark); +} +return out; + } +} + +bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& expr) { + return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr); +} + +bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, const Expr& expr) { + bool matches = false; + if (const auto* op_node = expr.as()) { +Op op = GetRef(op_node); +auto attributes = attr_pattern->attrs.as()->dict; +for (auto kv : attributes) { + auto attr_name = kv.first; + auto attr_value = kv.second; + auto op_map = Op::GetAttr(attr_name); + if (op_map.count(op)) { +switch (op_map[op].type_code()) { + case kDLInt: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator int64_t(); +} +break; + case kDLFloat: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator double(); +} +break; + case kTVMStr: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator std::string(); +} +break; + default: +CHECK(f
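The watermark mechanism in `ClearMap` above (record matched patterns in order, then erase everything memoized past a saved position when a trial match fails) can be sketched in Python. This is an illustration with string stand-ins for pattern and expression nodes, not TVM's implementation:

```python
class BacktrackingMemo:
    """Memoization table that can roll back to a watermark, mirroring
    the matched_nodes_/ClearMap pair in DFPatternMatcher."""

    def __init__(self):
        self.memo = {}      # pattern -> matched expression
        self.matched = []   # patterns in insertion order, for rollback

    def record(self, pattern, expr):
        self.memo[pattern] = expr
        self.matched.append(pattern)

    def watermark(self):
        return len(self.matched)

    def rollback(self, watermark):
        # Erase everything memoized since the watermark, as ClearMap does.
        for pattern in self.matched[watermark:]:
            del self.memo[pattern]
        del self.matched[watermark:]

m = BacktrackingMemo()
m.record("pat_a", "expr1")
w = m.watermark()
m.record("pat_b", "expr2")  # trial match...
m.rollback(w)               # ...failed, so undo it
print(sorted(m.memo))       # -> ['pat_a']
```

This is what lets `VisitDFPattern` try an alternative (e.g. the right branch of an `AltPattern`) without stale entries from a failed left branch polluting the memo.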
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424838458 ## File path: src/relay/ir/dataflow_matcher.cc ## @@ -0,0 +1,656 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file src/tvm/relay/dataflow_matcher.cc + * \brief The dataflow pattern matcher for Relay. 
+ */ + +#include +#include +#include +#include + +#include + +#include "indexed_graph.h" + +namespace tvm { +namespace relay { + +// Pattern Matcher + +class DominatorMatcher; + +class DFPatternMatcher : public DFPatternFunctor { + public: + explicit DFPatternMatcher(const Expr& root_expr) : expr_graph_(CreateIndexedGraph(root_expr)) {} + bool Match(const DFPattern& pattern, const Expr& expr); + Map> GetMemo() { return Map>(memo_); } + + protected: + bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override; + bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) override; + + void ClearMap(size_t watermark); + bool MatchesPath(const DominatorPatternNode* op, const Expr& expr); + bool DominatesParent(const DominatorPatternNode* op, const Expr& expr); + + std::unordered_map, ObjectHash, ObjectEqual> memo_; + std::vector matched_nodes_; + IndexedGraph expr_graph_; + IndexedGraph pattern_graph_; + bool memoize_ = true; +}; + +bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) { + memo_.clear(); + matched_nodes_.clear(); + return VisitDFPattern(pattern, expr); +} + +void DFPatternMatcher::ClearMap(size_t watermark) { + for (size_t i = watermark; i < matched_nodes_.size(); ++i) { +memo_.erase(matched_nodes_[i]); + } + 
matched_nodes_.erase(matched_nodes_.begin() + watermark, matched_nodes_.end()); +} + +bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& expr) { + if (memoize_ && memo_.count(pattern)) { +CHECK_EQ(memo_[pattern].size(), 1); +return expr.same_as(memo_[pattern][0]); + } else { +auto watermark = matched_nodes_.size(); +auto out = DFPatternFunctor::VisitDFPattern(pattern, expr); +if (out) { + memo_[pattern].push_back(expr); + matched_nodes_.push_back(pattern); +} else { + ClearMap(watermark); +} +return out; + } +} + +bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& expr) { + return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr); +} + +bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, const Expr& expr) { + bool matches = false; + if (const auto* op_node = expr.as()) { +Op op = GetRef(op_node); +auto attributes = attr_pattern->attrs.as()->dict; +for (auto kv : attributes) { + auto attr_name = kv.first; + auto attr_value = kv.second; + auto op_map = Op::GetAttr(attr_name); + if (op_map.count(op)) { +switch (op_map[op].type_code()) { + case kDLInt: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator int64_t(); +} +break; + case kDLFloat: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator double(); +} +break; + case kTVMStr: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator std::string(); +} +break; + default: +CHECK(f
[GitHub] [incubator-tvm] maheshambule commented on pull request #5052: Relay to ONNX and ONNX codegen
maheshambule commented on pull request #5052: URL: https://github.com/apache/incubator-tvm/pull/5052#issuecomment-628341773 @tqchen, @yongwww, @zhiics, @kevinthesun, @alexwong, based on the discussion on [RFC](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101), the PR is updated. Please help in the review.
[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner
masahi commented on a change in pull request #5231: URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r424827496 ## File path: src/relay/ir/dataflow_matcher.cc ## @@ -0,0 +1,656 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file src/tvm/relay/dataflow_matcher.cc + * \brief The dataflow pattern matcher for Relay. 
+ */ + +#include +#include +#include +#include + +#include + +#include "indexed_graph.h" + +namespace tvm { +namespace relay { + +// Pattern Matcher + +class DominatorMatcher; + +class DFPatternMatcher : public DFPatternFunctor { + public: + explicit DFPatternMatcher(const Expr& root_expr) : expr_graph_(CreateIndexedGraph(root_expr)) {} + bool Match(const DFPattern& pattern, const Expr& expr); + Map> GetMemo() { return Map>(memo_); } + + protected: + bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override; + bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override; + bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) override; + + void ClearMap(size_t watermark); + bool MatchesPath(const DominatorPatternNode* op, const Expr& expr); + bool DominatesParent(const DominatorPatternNode* op, const Expr& expr); + + std::unordered_map, ObjectHash, ObjectEqual> memo_; + std::vector matched_nodes_; + IndexedGraph expr_graph_; + IndexedGraph pattern_graph_; + bool memoize_ = true; +}; + +bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) { + memo_.clear(); + matched_nodes_.clear(); + return VisitDFPattern(pattern, expr); +} + +void DFPatternMatcher::ClearMap(size_t watermark) { + for (size_t i = watermark; i < matched_nodes_.size(); ++i) { +memo_.erase(matched_nodes_[i]); + } + 
matched_nodes_.erase(matched_nodes_.begin() + watermark, matched_nodes_.end()); +} + +bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& expr) { + if (memoize_ && memo_.count(pattern)) { +CHECK_EQ(memo_[pattern].size(), 1); +return expr.same_as(memo_[pattern][0]); + } else { +auto watermark = matched_nodes_.size(); +auto out = DFPatternFunctor::VisitDFPattern(pattern, expr); +if (out) { + memo_[pattern].push_back(expr); + matched_nodes_.push_back(pattern); +} else { + ClearMap(watermark); +} +return out; + } +} + +bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& expr) { + return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr); +} + +bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, const Expr& expr) { + bool matches = false; + if (const auto* op_node = expr.as()) { +Op op = GetRef(op_node); +auto attributes = attr_pattern->attrs.as()->dict; +for (auto kv : attributes) { + auto attr_name = kv.first; + auto attr_value = kv.second; + auto op_map = Op::GetAttr(attr_name); + if (op_map.count(op)) { +switch (op_map[op].type_code()) { + case kDLInt: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator int64_t(); +} +break; + case kDLFloat: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator double(); +} +break; + case kTVMStr: +if (auto* val = kv.second.as()) { + matches = val->value == op_map[op].operator std::string(); +} +break; + default: +CHECK(f
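The `VisitDFPattern`/`ClearMap` pair quoted above implements backtracking memoization: the matcher records a watermark before trying a pattern, and on failure rolls back every binding made past that point, while successful bindings are memoized so a pattern can only ever match one expression. A minimal Python sketch of that idea (class and method names are hypothetical, not the TVM API):

```python
class Matcher:
    """Sketch of watermark-based backtracking memoization, as in
    DFPatternMatcher::VisitDFPattern/ClearMap (names hypothetical)."""

    def __init__(self):
        self.memo = {}      # pattern -> the single expr it bound
        self.matched = []   # binding order, used for rollback

    def visit(self, pattern, expr, try_match):
        if pattern in self.memo:
            # A memoized pattern may only ever bind one expression.
            return self.memo[pattern] is expr
        watermark = len(self.matched)
        if try_match(pattern, expr):
            self.memo[pattern] = expr
            self.matched.append(pattern)
            return True
        # Failure: roll back any bindings made below the watermark.
        for p in self.matched[watermark:]:
            del self.memo[p]
        del self.matched[watermark:]
        return False
```

A failed alternative therefore leaves no stray bindings behind, which is what makes `AltPattern`'s `left || right` short-circuit safe.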
[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5144: [Relay][VM] Memory planner (part 1)
jroesch commented on a change in pull request #5144: URL: https://github.com/apache/incubator-tvm/pull/5144#discussion_r424823515 ## File path: python/tvm/relay/transform/memory_plan.py ## @@ -0,0 +1,353 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# pylint: disable=no-else-return,invalid-name,len-as-condition,too-many-nested-blocks +""" +A pass for manifesting explicit memory allocations. +""" +from typing import Optional, Dict, List, Tuple +from collections import defaultdict +import attr + +from ..expr_functor import ExprMutator +from .. import op, expr +from ..function import Function +from ... import register_func, ir, cpu +from ..._ffi.runtime_ctypes import TVMContext +from ... import IRModule +from .. import transform +from . import function_pass + + +def is_primitive(call): +return ( +hasattr(call, "op") +and hasattr(call.op, "attrs") +and hasattr(call.op.attrs, "Primitive") +and int(call.op.attrs.Primitive) == 1 +) + + +@attr.s(auto_attribs=True) +class Region: +""" +Represents a control-free allocation region. + +The below pass groups sets of allocations into regions, +then replaces the region with a single allocation. 
+""" +var: expr.Var +size: expr.Expr +alignment: Optional[expr.Expr] +dtype: Optional[str] +ctx: TVMContext +offsets: Dict[expr.Var, Tuple[expr.Expr, expr.Expr]] + +@staticmethod +def empty(region_no): +zero = expr.const(0, dtype="int64") +assert len(zero.data.shape) == 0 +region_var = expr.var(f"region{region_no}") +return Region(region_var, zero, None, None, None, {}) + +def grow( +self, old_storage: expr.Var, +size: expr.Expr, alignment: expr.Expr, +ctx: TVMContext, +dtype: str) -> None: +"""Grow the region by a given allocation as well as track the old storage + for later rewriting the program to use the allocated region. +""" +if self.dtype: +assert self.dtype == dtype, "must have matching dtypes in a region" +else: +self.dtype = dtype + +if self.alignment: +assert ir.structural_equal( +self.alignment, alignment +), "must have matching alignments in a region" +else: +self.alignment = alignment + +if self.ctx: +assert (self.ctx.device_type == ctx.device_type and +self.ctx.device_id == ctx.device_id), "must have matching context" +else: +assert ctx +self.ctx = ctx + +new_size = (size + self.alignment - expr.const(1, "int64")) \ +/ self.alignment * self.alignment + +# Record the offset at which we allocate the storage. +offset_var: expr.RelayExpr = expr.var(f"offset{len(self.offsets)}") +self.offsets[old_storage] = (offset_var, self.size) + +self.size = self.size + new_size + +def offset_for(self, alloc: expr.Expr) -> expr.Expr: +return self.offsets.get(alloc, [None])[0] + +def to_expr(self, body: expr.Expr) -> expr.Expr: +""" +Generate the prelude code for a region, wrapping the body in it. + +The prelude contains the single allocation for a region, and +all offset computations. +""" + +if self.ctx is None: +self.ctx = cpu(0) + +# Generate bindings for each and every size computation +# we must do this to maintain ANF. +bindings: List[Tuple[expr.Expr, expr.Expr]] = [] + +# First compute the total size. 
+total_size = expr.var(f"total_size{hash(body)}") +bindings.append((total_size, self.size)) + +# Allocate the entire region with a single call. +alloc = op.memory.alloc_storage(total_size, self.alignment, self.ctx, self.dtype) +bindings.append((self.var, alloc)) + +# Generate variables which contain all of the offset math. +# Ensure we constant evaluate away all the math here. +# +# In theory we can support dynamic offsets but this +# requires another round of memory planning and +# potentiall
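The `Region.grow` method quoted above rounds each allocation up to the region's alignment with `(size + alignment - 1) / alignment * alignment` before adding it to the running total. A plain-Python spot check of that arithmetic (sizes and alignment are hypothetical):

```python
def align_up(size: int, alignment: int) -> int:
    # Same align-up arithmetic as Region.grow: round size up
    # to the next multiple of alignment.
    return (size + alignment - 1) // alignment * alignment

# Spot checks with a 64-byte alignment.
assert align_up(1, 64) == 64
assert align_up(64, 64) == 64
assert align_up(65, 64) == 128
```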
[GitHub] [incubator-tvm] tqchen commented on pull request #5591: Fix JSON graph dumping.
tqchen commented on pull request #5591: URL: https://github.com/apache/incubator-tvm/pull/5591#issuecomment-628329888 Thanks @areusch !
[GitHub] [incubator-tvm] tqchen merged pull request #5591: Fix JSON graph dumping.
tqchen merged pull request #5591: URL: https://github.com/apache/incubator-tvm/pull/5591
[incubator-tvm] branch master updated: Fix JSON graph dumping. (#5591)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 482e341 Fix JSON graph dumping. (#5591) 482e341 is described below commit 482e34107054a08324b29d4078dfaecbe3c68430 Author: Andrew Reusch AuthorDate: Wed May 13 18:20:26 2020 -0700 Fix JSON graph dumping. (#5591) * Previously this function placed a JSON-escaped string containing the JSON-encoded graph. --- python/tvm/contrib/debugger/debug_result.py | 8 tests/python/unittest/test_runtime_graph_debug.py | 13 +++-- 2 files changed, 15 insertions(+), 6 deletions(-) diff --git a/python/tvm/contrib/debugger/debug_result.py b/python/tvm/contrib/debugger/debug_result.py index 18920c6..b1fe1b6 100644 --- a/python/tvm/contrib/debugger/debug_result.py +++ b/python/tvm/contrib/debugger/debug_result.py @@ -53,9 +53,9 @@ class DebugResult(object): self._dump_path = dump_path self._output_tensor_list = [] self._time_list = [] -self._parse_graph(graph_json) +json_obj = self._parse_graph(graph_json) # dump the json information -self.dump_graph_json(graph_json) +self._dump_graph_json(json_obj) def _parse_graph(self, graph_json): """Parse and extract the JSON graph and update the nodes, shapes and dltype. @@ -70,12 +70,12 @@ class DebugResult(object): self._shapes_list = json_obj['attrs']['shape'] self._dtype_list = json_obj['attrs']['dltype'] self._update_graph_json() +return json_obj def _update_graph_json(self): """update the nodes_list with name, shape and data type, for temporarily storing the output. 
""" - nodes_len = len(self._nodes_list) for i in range(nodes_len): node = self._nodes_list[i] @@ -192,7 +192,7 @@ class DebugResult(object): with open(os.path.join(self._dump_path, CHROME_TRACE_FILE_NAME), "w") as trace_f: json.dump(result, trace_f) -def dump_graph_json(self, graph): +def _dump_graph_json(self, graph): """Dump json formatted graph. Parameters diff --git a/tests/python/unittest/test_runtime_graph_debug.py b/tests/python/unittest/test_runtime_graph_debug.py index 658d9eb..ce47b16 100644 --- a/tests/python/unittest/test_runtime_graph_debug.py +++ b/tests/python/unittest/test_runtime_graph_debug.py @@ -14,11 +14,11 @@ # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. +import json import os import tvm from tvm import te import numpy as np -import json from tvm import rpc from tvm.contrib import util from tvm.contrib.debugger import debug_runtime as graph_runtime @@ -75,7 +75,16 @@ def test_graph_simple(): assert(len(os.listdir(directory)) == 1) #verify the file name is proper -assert(os.path.exists(os.path.join(directory, GRAPH_DUMP_FILE_NAME))) +graph_dump_path = os.path.join(directory, GRAPH_DUMP_FILE_NAME) +assert(os.path.exists(graph_dump_path)) + +# verify the graph contains some expected keys +with open(graph_dump_path) as graph_f: +dumped_graph = json.load(graph_f) + +assert isinstance(dumped_graph, dict) +for k in ("nodes", "arg_nodes", "node_row_ptr", "heads", "attrs"): +assert k in dumped_graph, f"key {k} not in dumped graph {graph!r}" mod.run() #Verify the tensors are dumped
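The bug this commit fixes is classic double encoding: `graph_json` was already a JSON-encoded string, so passing it to the JSON dumper again wrote a JSON-escaped string literal instead of an object. A minimal reproduction (the graph contents are hypothetical):

```python
import json

graph_json = '{"nodes": [], "arg_nodes": []}'  # already JSON-encoded

# Buggy path: encoding the string a second time produces an escaped
# string literal, so consumers of the dump file get a str, not a dict.
double_encoded = json.dumps(graph_json)
assert json.loads(double_encoded) == graph_json
assert isinstance(json.loads(double_encoded), str)

# Fixed path (as in the patch): parse once, then dump the parsed object.
json_obj = json.loads(graph_json)
assert isinstance(json.loads(json.dumps(json_obj)), dict)
```

This is why the patched test asserts that `json.load` of the dump file yields a dict with the expected top-level keys.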
[GitHub] [incubator-tvm] tqchen merged pull request #5589: [Hexagon] One more fix for concurrency count
tqchen merged pull request #5589: URL: https://github.com/apache/incubator-tvm/pull/5589
[incubator-tvm] branch master updated: [Hexagon] One more fix for concurrency count (#5589)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new dc9b557 [Hexagon] One more fix for concurrency count (#5589) dc9b557 is described below commit dc9b55768ef880ff9307bf2d3139bbf3cd2e2568 Author: Krzysztof Parzyszek AuthorDate: Wed May 13 19:16:14 2020 -0500 [Hexagon] One more fix for concurrency count (#5589) --- src/runtime/threading_backend.cc | 8 1 file changed, 8 insertions(+) diff --git a/src/runtime/threading_backend.cc b/src/runtime/threading_backend.cc index 2e781ea..e5520ef 100644 --- a/src/runtime/threading_backend.cc +++ b/src/runtime/threading_backend.cc @@ -34,6 +34,9 @@ #if defined(__linux__) #include #endif +#if defined(__hexagon__) +#include +#endif namespace tvm { namespace runtime { @@ -177,6 +180,11 @@ class ThreadGroup::Impl { void InitSortedOrder() { unsigned int threads = std::thread::hardware_concurrency(); +#if defined(__hexagon__) +// With unsigned PDs, getting the number of available hardware threads +// is not supported in earlier versions of QuRT. In such cases assume 4. +if (threads == 0) threads = 4; +#endif std::vector > max_freqs; for (unsigned int i = 0; i < threads; ++i) {
[GitHub] [incubator-tvm] areusch commented on pull request #5581: Add debug mode to tempdir()
areusch commented on pull request #5581: URL: https://github.com/apache/incubator-tvm/pull/5581#issuecomment-628311203 cc @tqchen
[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5144: [Relay][VM] Memory planner (part 1)
zhiics commented on a change in pull request #5144: URL: https://github.com/apache/incubator-tvm/pull/5144#discussion_r424747097 ## File path: python/tvm/relay/transform/memory_plan.py ## @@ -0,0 +1,353 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# pylint: disable=no-else-return,invalid-name,len-as-condition,too-many-nested-blocks +""" +A pass for manifesting explicit memory allocations. +""" +from typing import Optional, Dict, List, Tuple +from collections import defaultdict +import attr + +from ..expr_functor import ExprMutator +from .. import op, expr +from ..function import Function +from ... import register_func, ir, cpu +from ..._ffi.runtime_ctypes import TVMContext +from ... import IRModule +from .. import transform +from . import function_pass + + +def is_primitive(call): +return ( +hasattr(call, "op") +and hasattr(call.op, "attrs") +and hasattr(call.op.attrs, "Primitive") +and int(call.op.attrs.Primitive) == 1 +) + + +@attr.s(auto_attribs=True) +class Region: +""" +Represents a control-free allocation region. + +The below pass groups sets of allocations into regions, +then replaces the region with a single allocation. 
+""" +var: expr.Var +size: expr.Expr +alignment: Optional[expr.Expr] +dtype: Optional[str] +ctx: TVMContext +offsets: Dict[expr.Var, Tuple[expr.Expr, expr.Expr]] + +@staticmethod +def empty(region_no): +zero = expr.const(0, dtype="int64") +assert len(zero.data.shape) == 0 +region_var = expr.var(f"region{region_no}") +return Region(region_var, zero, None, None, None, {}) + +def grow( +self, old_storage: expr.Var, +size: expr.Expr, alignment: expr.Expr, +ctx: TVMContext, +dtype: str) -> None: +"""Grow the region by a given allocation as well as track the old storage + for later rewriting the program to use the allocated region. +""" +if self.dtype: +assert self.dtype == dtype, "must have matching dtypes in a region" +else: +self.dtype = dtype + +if self.alignment: +assert ir.structural_equal( +self.alignment, alignment +), "must have matching alignments in a region" +else: +self.alignment = alignment + +if self.ctx: +assert (self.ctx.device_type == ctx.device_type and +self.ctx.device_id == ctx.device_id), "must have matching context" +else: +assert ctx +self.ctx = ctx + +new_size = (size + self.alignment - expr.const(1, "int64")) \ +/ self.alignment * self.alignment + +# Record the offset at which we allocate the storage. +offset_var: expr.RelayExpr = expr.var(f"offset{len(self.offsets)}") +self.offsets[old_storage] = (offset_var, self.size) + +self.size = self.size + new_size + +def offset_for(self, alloc: expr.Expr) -> expr.Expr: +return self.offsets.get(alloc, [None])[0] + +def to_expr(self, body: expr.Expr) -> expr.Expr: +""" +Generate the prelude code for a region, wrapping the body in it. + +The prelude contains the single allocation for a region, and +all offset computations. +""" + +if self.ctx is None: +self.ctx = cpu(0) + +# Generate bindings for each and every size computation +# we must do this to maintain ANF. +bindings: List[Tuple[expr.Expr, expr.Expr]] = [] + +# First compute the total size. 
+total_size = expr.var(f"total_size{hash(body)}") +bindings.append((total_size, self.size)) + +# Allocate the entire region with a single call. +alloc = op.memory.alloc_storage(total_size, self.alignment, self.ctx, self.dtype) +bindings.append((self.var, alloc)) + +# Generate variables which contain all of the offset math. +# Ensure we constant evaluate away all the math here. +# +# In theory we can support dynamic offsets but this +# requires another round of memory planning and +# potentially
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5588: [Frontend][Tensorflow] Gather nd bug fix for one dim support in tensorflow
kevinthesun commented on a change in pull request #5588: URL: https://github.com/apache/incubator-tvm/pull/5588#discussion_r424747754 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -1378,9 +1378,11 @@ def _gather_nd(): def _impl(inputs, attr, params, mod): indices_dims = len(_infer_shape(inputs[1], mod)) indices = _op.transpose(inputs[1], axes=[-1] + list(range(indices_dims-1))) +attr_new = {} +attr_new['one_dim_support'] = True Review comment: Looks like topi gather_nd supports 1 dim now. I'm wondering whether we really need a new attribute for this, or we can just change the checking in topi to be GE than 1 dim?
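For context, the frontend difference under discussion is only the rank of the index tuples: as described in the thread, TensorFlow's `gather_nd` allows 1-element index tuples that select whole subarrays, while MXNet requires tuples of at least 2 indices. A minimal pure-Python reference of the semantics (illustrative only, not the topi implementation):

```python
def lookup(data, idx):
    # Follow one index tuple into nested lists.
    for i in idx:
        data = data[i]
    return data

def gather_nd(data, indices):
    # Minimal reference: `indices` is a list of index tuples; 1-tuples
    # (the TF-only case) select whole rows rather than scalars.
    return [lookup(data, idx) for idx in indices]

data = [[1, 2], [3, 4]]

# 2-index tuples: pick data[0][0] and data[1][1].
assert gather_nd(data, [(0, 0), (1, 1)]) == [1, 4]

# 1-index tuples (supported by TF, rejected by MXNet): pick rows 1 and 0.
assert gather_nd(data, [(1,), (0,)]) == [[3, 4], [1, 2]]
```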
[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5588: [Frontend][Tensorflow] Gather nd bug fix for one dim support in tensorflow
kevinthesun commented on a change in pull request #5588: URL: https://github.com/apache/incubator-tvm/pull/5588#discussion_r424747754 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -1378,9 +1378,11 @@ def _gather_nd(): def _impl(inputs, attr, params, mod): indices_dims = len(_infer_shape(inputs[1], mod)) indices = _op.transpose(inputs[1], axes=[-1] + list(range(indices_dims-1))) +attr_new = {} +attr_new['one_dim_support'] = True Review comment: Looks like ```one_dim_support``` is just for checking but doesn't change functionality. Can you explain a bit more about why we need to add an attribute for relay op?
[GitHub] [incubator-tvm] areusch opened a new pull request #5592: Add ostream formatters for TargetPtr/TargetVal.
areusch opened a new pull request #5592: URL: https://github.com/apache/incubator-tvm/pull/5592 This PR adds formatters so you can use TargetPtr/Val with e.g. LOG(INFO) and std::cout. Planning to add tests after the runtime rewrite (it's possible these two classes won't be needed after that, but they're very useful now).
[GitHub] [incubator-tvm] tqchen commented on pull request #5590: Overestimate binary size for microTVM compiled binaries.
tqchen commented on pull request #5590: URL: https://github.com/apache/incubator-tvm/pull/5590#issuecomment-628246165 Please fix the CI error
[incubator-tvm] branch master updated: [MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid (#5587)
This is an automated email from the ASF dual-hosted git repository. masahi pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 10c2b7f [MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid (#5587) 10c2b7f is described below commit 10c2b7ff58c64694c5d99ad7d85cee5bad19c4ed Author: Samuel AuthorDate: Thu May 14 02:31:36 2020 +0530 [MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid (#5587) --- python/tvm/relay/frontend/mxnet.py | 19 +++ tests/python/frontend/mxnet/test_forward.py | 4 +++- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/python/tvm/relay/frontend/mxnet.py b/python/tvm/relay/frontend/mxnet.py index 4cb7a2a..4c3144c 100644 --- a/python/tvm/relay/frontend/mxnet.py +++ b/python/tvm/relay/frontend/mxnet.py @@ -789,6 +789,19 @@ def _mx_l2_normalize(inputs, attrs): return _op.nn.l2_normalize(inputs[0], **new_attrs) +def _mx_softsign(inputs, attrs): +return inputs[0] / (_expr.const(1.0) + _op.abs(inputs[0])) + + +def _mx_hard_sigmoid(inputs, attrs): +x = (_expr.const(0.2) * inputs[0]) + _expr.const(0.5) +return _op.clip(x, a_min=0.0, a_max=1.0) + + +def _mx_reciprocal(inputs, attrs): +return _expr.const(1.0) /inputs[0] + + def _mx_shape_array(inputs, attrs): assert len(inputs) == 1 if attrs.get_int("lhs_begin", None) is not None: @@ -1742,12 +1755,15 @@ def _mx_broadcast_logical(logical_op): # Note: due to attribute conversion constraint # ops in the identity set must be attribute free _identity_list = [ +"abs", "log", "exp", "erf", "sqrt", "floor", "ceil", +"round", +"sign", "sigmoid", "negative", "reshape_like", @@ -1856,6 +1872,9 @@ _convert_map = { "softmax" : _softmax_op(_op.nn.softmax), "log_softmax" : _softmax_op(_op.nn.log_softmax), "Softmax" : _softmax_op(_op.nn.softmax), +"softsign" : _mx_softsign, +"hard_sigmoid" : _mx_hard_sigmoid, +"reciprocal": _mx_reciprocal, # per op specialization "Reshape" : 
_reshape, "reshape" : _reshape, diff --git a/tests/python/frontend/mxnet/test_forward.py b/tests/python/frontend/mxnet/test_forward.py index 3fb8e30..9dd8506 100644 --- a/tests/python/frontend/mxnet/test_forward.py +++ b/tests/python/frontend/mxnet/test_forward.py @@ -365,7 +365,9 @@ def test_forward_elemwise_ops(): def test_forward_unary_ops(): -for op in ["cos", "sin", "tan", +for op in ["abs", "sqrt", "ceil", "floor", "round", "reciprocal", + "softsign", "hard_sigmoid", + "cos", "sin", "tan", "cosh", "sinh", "tanh", "arccos", "arcsin", "arctan", "arccosh", "arcsinh", "arctanh"]:
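The three new converters in this commit are simple elementwise formulas. A plain-Python spot check of the math (inputs are hypothetical):

```python
def softsign(x):
    # x / (1 + |x|), matching _mx_softsign above
    return x / (1.0 + abs(x))

def hard_sigmoid(x):
    # clip(0.2 * x + 0.5, 0, 1), matching _mx_hard_sigmoid above
    return min(max(0.2 * x + 0.5, 0.0), 1.0)

def reciprocal(x):
    # 1 / x, matching _mx_reciprocal above
    return 1.0 / x

assert softsign(1.0) == 0.5
assert softsign(-1.0) == -0.5
assert hard_sigmoid(0.0) == 0.5
assert hard_sigmoid(10.0) == 1.0   # saturates high
assert hard_sigmoid(-10.0) == 0.0  # saturates low
assert reciprocal(4.0) == 0.25
```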
[GitHub] [incubator-tvm] masahi merged pull request #5587: [FRONTEND][MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid ops support
masahi merged pull request #5587: URL: https://github.com/apache/incubator-tvm/pull/5587
[GitHub] [incubator-tvm] masahi commented on pull request #5587: [FRONTEND][MXNET]abs, round, reciprocal, sign, softsign, hard_sigmoid ops support
masahi commented on pull request #5587: URL: https://github.com/apache/incubator-tvm/pull/5587#issuecomment-628243757 Thanks @siju-samuel
[GitHub] [incubator-tvm] areusch opened a new pull request #5591: Fix JSON graph dumping.
areusch opened a new pull request #5591: URL: https://github.com/apache/incubator-tvm/pull/5591 * Previously this function placed a JSON-escaped string containing the JSON-encoded graph.
[GitHub] [incubator-tvm] areusch commented on pull request #5591: Fix JSON graph dumping.
areusch commented on pull request #5591: URL: https://github.com/apache/incubator-tvm/pull/5591#issuecomment-628227555 cc @tqchen
[GitHub] [incubator-tvm] areusch commented on a change in pull request #5590: Overestimate binary size for microTVM compiled binaries.
areusch commented on a change in pull request #5590: URL: https://github.com/apache/incubator-tvm/pull/5590#discussion_r424707211 ## File path: python/tvm/contrib/binutil.py ## @@ -166,6 +166,11 @@ def tvm_callback_get_section_size(binary_path, section_name, toolchain_prefix): # NOTE: in the past, section_size has been wrong on x86. it may be # inconsistent. TODO: maybe stop relying on `*size` to give us the size and # instead read the section with `*objcopy` and count the bytes. +# NOTE(areusch): I think the problem is due to alignment ops in the linker. +# Since this is going away in the impending switch to on-device runtime, +# add a constant to hopefully absorb these relocations. +if section_size > 0: +section_size += 32 Review comment: removed that if block, though I haven't tested riscv at all at master.
[incubator-tvm] branch master updated (301f515 -> 079978e)
This is an automated email from the ASF dual-hosted git repository. zhic pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 301f515 Add a quantized conv2 unit test for the tflite front-end (#5558) add 079978e [Relay][Transform] Safe check added for Merge Composite (#5562) No new revisions were added by this update. Summary of changes: src/relay/transforms/merge_composite.cc | 1 + 1 file changed, 1 insertion(+)
[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5562: [Relay][Transform] Safe check added for Merge Composite Call Node
ANSHUMAN87 commented on pull request #5562: URL: https://github.com/apache/incubator-tvm/pull/5562#issuecomment-628202770 Gentle ping @zhiics !
[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5578: [Relay][Refactor][std::string --> String] Relay updated with String
ANSHUMAN87 commented on pull request #5578: URL: https://github.com/apache/incubator-tvm/pull/5578#issuecomment-628201907 Conflict is resolved now, Thanks!
[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5589: [Hexagon] One more fix for concurrency count
kparzysz-quic opened a new pull request #5589: URL: https://github.com/apache/incubator-tvm/pull/5589
[GitHub] [incubator-tvm] ANSHUMAN87 opened a new pull request #5588: [Frontend][Tensorflow] Gather nd bug fix for one dim support in tensorflow
ANSHUMAN87 opened a new pull request #5588: URL: https://github.com/apache/incubator-tvm/pull/5588 @kazum , @FrozenGene , @kevinthesun : Please help review, Thanks!
[incubator-tvm] branch master updated (d0b15fe -> 301f515)
This is an automated email from the ASF dual-hosted git repository. anijain2305 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from d0b15fe [RELAY][Convert Layout] Specify additional layouts in convert layout pass (#5422) add 301f515 Add a quantized conv2 unit test for the tflite front-end (#5558) No new revisions were added by this update. Summary of changes: tests/python/frontend/tflite/test_forward.py | 48 +--- 1 file changed, 37 insertions(+), 11 deletions(-)
[incubator-tvm] branch master updated (b1eb97a -> d0b15fe)
This is an automated email from the ASF dual-hosted git repository. anijain2305 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from b1eb97a Fix the runtime raise error (#5586) add d0b15fe [RELAY][Convert Layout] Specify additional layouts in convert layout pass (#5422) No new revisions were added by this update.

Summary of changes:
 docs/dev/convert_layout.rst                       |  52 ++--
 include/tvm/relay/op_attr_types.h                 |   6 +-
 include/tvm/relay/transform.h                     |   6 +-
 python/tvm/relay/op/nn/_nn.py                     |  55 +---
 python/tvm/relay/qnn/op/layout_conversions.py     |  28 ++-
 python/tvm/relay/transform/transform.py           |  11 +-
 src/relay/transforms/convert_layout.cc            |  28 ++-
 tests/python/relay/test_pass_convert_op_layout.py | 152 --
 8 files changed, 267 insertions(+), 71 deletions(-)
[incubator-tvm] branch master updated: Fix the runtime raise error (#5586)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new b1eb97a Fix the runtime raise error (#5586) b1eb97a is described below

commit b1eb97ac1a073145728379824c6b0ec207ca3626
Author: LiangLiu
AuthorDate: Wed May 13 23:49:21 2020 +0800

    Fix the runtime raise error (#5586)
---
 python/tvm/autotvm/measure/measure_methods.py | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/python/tvm/autotvm/measure/measure_methods.py b/python/tvm/autotvm/measure/measure_methods.py
index 185ed7d..8f11a17 100644
--- a/python/tvm/autotvm/measure/measure_methods.py
+++ b/python/tvm/autotvm/measure/measure_methods.py
@@ -275,9 +275,8 @@ class RPCRunner(Runner):
             if isinstance(res, Exception):  # executor error or timeout
                 results.append(MeasureResult((str(res),), MeasureErrorNo.RUN_TIMEOUT,
                                              self.timeout, time.time()))
-                raise Exception(f'encountered exception during measurement: {results}')
-
-            results.append(res)
+            else:
+                results.append(res)

         return results
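The behavior of the fix above can be sketched as a small standalone loop (a simplified stand-in, not TVM's actual `RPCRunner`; `MeasureResult` and `RUN_TIMEOUT` here only mimic `tvm.autotvm`'s types): an `Exception` result is recorded as an error-tagged measurement instead of aborting the whole batch with `raise`.

```python
# Minimal sketch of the corrected measurement-collection loop from #5586.
import time
from collections import namedtuple

# Simplified stand-ins for tvm.autotvm's MeasureResult / MeasureErrorNo.
MeasureResult = namedtuple("MeasureResult",
                           ["costs", "error_no", "all_cost", "timestamp"])
RUN_TIMEOUT = 7

def collect_results(raw_results, timeout):
    results = []
    for res in raw_results:
        if isinstance(res, Exception):  # executor error or timeout
            results.append(MeasureResult((str(res),), RUN_TIMEOUT,
                                         timeout, time.time()))
        else:  # the fix: keep good results instead of raising
            results.append(res)
    return results
```

With the old code, one failed measurement raised and discarded every result in the batch; the `else` keeps successful measurements alongside error records.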
[incubator-tvm] branch master updated: Add prim::device op (#5584)
This is an automated email from the ASF dual-hosted git repository. masahi pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 2cd987d Add prim::device op (#5584) 2cd987d is described below

commit 2cd987d92724be0f859bfb624ce797f9c70167bb
Author: Candy <1915998...@qq.com>
AuthorDate: Wed May 13 15:07:31 2020 +0800

    Add prim::device op (#5584)
---
 python/tvm/relay/frontend/pytorch.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index c7eccf7..080046b 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -1615,6 +1615,7 @@ def _wrap_const(c):
 def _get_convert_map(prelude):
     convert_map = {
         "aten::device" : _none(),
+        "prim::device" : _none(),
         "aten::sub" : _elemwise("subtract"),
         "aten::sub_" : _elemwise("subtract"),
         "aten::max" : _elemwise("maximum"),
```
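The dispatch pattern behind the one-line fix above can be sketched as follows (a simplified illustration; `_none` here is a stand-in for the frontend's helper, not its real implementation): the converter table maps TorchScript op names to handler factories, so supporting `prim::device` only requires registering it alongside `aten::device` as a no-op.

```python
# Illustrative sketch of the PyTorch frontend's convert_map dispatch.
def _none():
    """Factory for a converter that emits nothing (device ops carry no data)."""
    def _impl(inputs, input_types):
        return None
    return _impl

convert_map = {
    "aten::device": _none(),
    "prim::device": _none(),  # the addition from #5584
}

# Looking up either spelling of the device op yields the same no-op converter.
assert convert_map["prim::device"]([], []) is None
```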