[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-26 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r460610246



##
File path: src/operator/subgraph/build_subgraph.cc
##
@@ -360,6 +405,17 @@ void SelectSubgraphNodes(nnvm::Graph* g, SubgraphSelectorV2Ptr subgraph_selector
     // filter out unqualified pre-selected nodes
     std::vector<BiDirectedNode*> filtered_nodes = subgraph_selector->Filter(preselected_nodes);
 
+    const SubgraphPropertyPtr& subg_prop = g->GetAttr<SubgraphPropertyPtr>("subgraph_property");
+    if (subg_prop->HasAttr("ensure_CachedOp_input")
+        && subg_prop->GetAttr<bool>("ensure_CachedOp_input")) {
+      // check if subgraph has external input.
+      // if not, reject the first op (in top order) from the subgraph
+      // to make sure CachedOp gets external input.
+      if (filtered_nodes.size() > 0 && !HasInputEntries(*g, simple_nodes, filtered_nodes)) {
+        filtered_nodes.erase(filtered_nodes.begin());

Review comment:
   Moved the `ensure_CachedOp_input` check inside `PreSelectSubgraphNodes`.
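For context, a rough sketch of how a property can opt in to this check (assuming the `SetAttr`/`HasAttr`/`GetAttr` helpers declared in `subgraph_property.h`; the constructor shown here is illustrative, not the exact code in this PR):

```cpp
// Sketch only: a property opts in by setting the "ensure_CachedOp_input"
// attribute on itself; the partitioner then reads it via
// HasAttr("ensure_CachedOp_input") && GetAttr<bool>("ensure_CachedOp_input")
// before dropping the first node of an input-less subgraph.
class StaticShapeSubgraphProperty : public SubgraphProperty {
 public:
  StaticShapeSubgraphProperty() {
    SetAttr("ensure_CachedOp_input", true);
  }
  // selector / CreateSubgraphNode overrides as elsewhere in this PR ...
};
```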





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-26 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r460585378



##
File path: src/operator/subgraph/build_subgraph.cc
##
@@ -360,6 +405,17 @@ void SelectSubgraphNodes(nnvm::Graph* g, SubgraphSelectorV2Ptr subgraph_selector
     // filter out unqualified pre-selected nodes
     std::vector<BiDirectedNode*> filtered_nodes = subgraph_selector->Filter(preselected_nodes);
 
+    const SubgraphPropertyPtr& subg_prop = g->GetAttr<SubgraphPropertyPtr>("subgraph_property");
+    if (subg_prop->HasAttr("ensure_CachedOp_input")
+        && subg_prop->GetAttr<bool>("ensure_CachedOp_input")) {
+      // check if subgraph has external input.
+      // if not, reject the first op (in top order) from the subgraph
+      // to make sure CachedOp gets external input.
+      if (filtered_nodes.size() > 0 && !HasInputEntries(*g, simple_nodes, filtered_nodes)) {
+        filtered_nodes.erase(filtered_nodes.begin());

Review comment:
   Thanks for bringing this up! I agree that we should run the filter function after removing the first node.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-26 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r460585378



##
File path: src/operator/subgraph/build_subgraph.cc
##
@@ -360,6 +405,17 @@ void SelectSubgraphNodes(nnvm::Graph* g, SubgraphSelectorV2Ptr subgraph_selector
     // filter out unqualified pre-selected nodes
     std::vector<BiDirectedNode*> filtered_nodes = subgraph_selector->Filter(preselected_nodes);
 
+    const SubgraphPropertyPtr& subg_prop = g->GetAttr<SubgraphPropertyPtr>("subgraph_property");
+    if (subg_prop->HasAttr("ensure_CachedOp_input")
+        && subg_prop->GetAttr<bool>("ensure_CachedOp_input")) {
+      // check if subgraph has external input.
+      // if not, reject the first op (in top order) from the subgraph
+      // to make sure CachedOp gets external input.
+      if (filtered_nodes.size() > 0 && !HasInputEntries(*g, simple_nodes, filtered_nodes)) {
+        filtered_nodes.erase(filtered_nodes.begin());

Review comment:
   Thanks for bringing this up! I agree that the filter function should be called after removing the first node, not before. I'll include the change in the next commit.
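For illustration, the ordering could look like the sketch below (a fragment of `SelectSubgraphNodes`, assuming the attribute check has already been folded into a local `ensure_input` boolean and the same element type as the hunk above; not necessarily the exact code that will land):

```cpp
// Sketch only: drop the first node, then re-run the property's Filter so the
// final candidate set is still one the selector accepts.
std::vector<BiDirectedNode*> filtered_nodes = subgraph_selector->Filter(preselected_nodes);
if (ensure_input && !filtered_nodes.empty() &&
    !HasInputEntries(*g, simple_nodes, filtered_nodes)) {
  // reject the topologically first op so the CachedOp subgraph keeps an external input
  filtered_nodes.erase(filtered_nodes.begin());
  // re-apply the filter to the reduced candidate set
  filtered_nodes = subgraph_selector->Filter(filtered_nodes);
}
```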





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-26 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r460555053



##
File path: src/operator/subgraph/build_subgraph.cc
##
@@ -360,6 +405,17 @@ void SelectSubgraphNodes(nnvm::Graph* g, SubgraphSelectorV2Ptr subgraph_selector
     // filter out unqualified pre-selected nodes
     std::vector<BiDirectedNode*> filtered_nodes = subgraph_selector->Filter(preselected_nodes);
 
+    const SubgraphPropertyPtr& subg_prop = g->GetAttr<SubgraphPropertyPtr>("subgraph_property");
+    if (subg_prop->HasAttr("ensure_CachedOp_input")
+        && subg_prop->GetAttr<bool>("ensure_CachedOp_input")) {
+      // check if subgraph has external input.
+      // if not, reject the first op (in top order) from the subgraph
+      // to make sure CachedOp gets external input.
+      if (filtered_nodes.size() > 0 && !HasInputEntries(*g, simple_nodes, filtered_nodes)) {
+        filtered_nodes.erase(filtered_nodes.begin());

Review comment:
   Yes, this is handled by the filter function in our subgraph property. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-23 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r459832436



##
File path: src/operator/subgraph/static_shape_subgraph_property.cc
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "./common.h"
+#include "./subgraph_property.h"
+#include "../../imperative/cached_op.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains static shape operators
+ * and it visits nodes via both input and output links.
+ */
+class StaticShapeOpSelector: public SubgraphSelector {
+ public:
+  virtual bool Select(const nnvm::Node &seed_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !seed_node.is_variable() && infershape.count(seed_node.op());
+  }
+
+  virtual bool SelectInput(const nnvm::Node &cur_node, const nnvm::Node &input_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !input_node.is_variable() && infershape.count(input_node.op());
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &cur_node, const nnvm::Node &output_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !output_node.is_variable() && infershape.count(output_node.op());
+  }

Review comment:
   Added the single-node rejection mechanism in the property filter.
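As a rough illustration, the rejection in the selector's `Filter` might look like this (assuming the `std::vector<nnvm::Node*>` signature from `subgraph_property.h`; the exact condition used in the PR may differ):

```cpp
// Sketch only: return an empty set for single-node candidates so a lone
// static-shape op is not wrapped into its own _CachedOp subgraph.
virtual std::vector<nnvm::Node*> Filter(const std::vector<nnvm::Node*>& candidates) {
  if (candidates.size() <= 1) {
    return std::vector<nnvm::Node*>();
  }
  return candidates;
}
```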





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-17 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r456657714



##
File path: src/operator/subgraph/static_shape_subgraph_property.cc
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "./common.h"
+#include "./subgraph_property.h"
+#include "../../imperative/cached_op.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains static shape operators
+ * and it visits nodes via both input and output links.
+ */
+class StaticShapeOpSelector: public SubgraphSelector {
+ public:
+  virtual bool Select(const nnvm::Node &seed_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !seed_node.is_variable() && infershape.count(seed_node.op());
+  }
+
+  virtual bool SelectInput(const nnvm::Node &cur_node, const nnvm::Node &input_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !input_node.is_variable() && infershape.count(input_node.op());
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &cur_node, const nnvm::Node &output_node) {
+    const auto& infershape = nnvm::Op::GetAttr<mxnet::FInferShape>("FInferShape");
+    return !output_node.is_variable() && infershape.count(output_node.op());
+  }
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only static shape operators.
+ * The operators in the subgraph will be executed by _CachedOp.
+ */
+class StaticShapeSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<StaticShapeSubgraphProperty>(); }
+
+  // the criteria of selecting the subgraph nodes
+  virtual SubgraphSelectorPtr CreateSubgraphSelector() const {
+    return std::make_shared<StaticShapeOpSelector>();
+  }
+
+  // create an nnvm node for a given subgraph
+  virtual nnvm::ObjectPtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                             const int subgraph_id = 0) const {
+    nnvm::ObjectPtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_CachedOp");

Review comment:
   Makes sense, will add it
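For reference, a hedged sketch of how a property like this is usually registered (assuming the `MXNET_REGISTER_SUBGRAPH_BACKEND` / `MXNET_REGISTER_SUBGRAPH_PROPERTY` macros from `subgraph_property.h`; the backend name below is illustrative, not necessarily the one this PR uses):

```cpp
// Sketch only: register the property under an illustrative backend name so the
// graph partitioner can look it up when optimizing around dynamic shape ops.
MXNET_REGISTER_SUBGRAPH_BACKEND(static_shape);
MXNET_REGISTER_SUBGRAPH_PROPERTY(static_shape, StaticShapeSubgraphProperty);
```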





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #18690: [WIP] optimize graph in presence of dynamic shape ops

2020-07-17 Thread GitBox


waytrue17 commented on a change in pull request #18690:
URL: https://github.com/apache/incubator-mxnet/pull/18690#discussion_r456570278



##
File path: python/mxnet/symbol/symbol.py
##
@@ -2627,6 +2628,15 @@ def detach(self):
     def backward(self):
         raise NotImplementedForSymbol(self.backward, None)
 
+    def optimize_for_dynamic_shape_op(self):
+        """Check whether any dynamic shape op is present in the symbol; if so, partition
+        all static shape ops for optimization and return the optimized symbol.
+        """
+        out = SymbolHandle()
+        check_call(_LIB.MXOptimizeForDynamicShapeOp(self.handle, ctypes.byref(out)))
+        from .numpy import _Symbol as np_symbol

Review comment:
   I tried to put it at the top, but it causes a circular import error.

##
File path: python/mxnet/symbol/symbol.py
##
@@ -1470,6 +1470,10 @@ def optimize_for(self, backend, args=None, aux=None, ctx=None,
         ctx : Context, optional
             Device context, used to infer stypes
 
+        is_np_sym : boolean, optional
+            Output symbol type
+            - If true, output type is np symbol, otherwise nd symbol.
+

Review comment:
   Will be changing it

##
File path: python/mxnet/symbol/symbol.py
##
@@ -2627,6 +2633,24 @@ def detach(self):
     def backward(self):
         raise NotImplementedForSymbol(self.backward, None)
 
+    def optimize_for_dynamic_shape_op(self, is_np_sym=False):

Review comment:
   Will change it to a private function.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org