samskalicky commented on a change in pull request #18779:
URL: https://github.com/apache/incubator-mxnet/pull/18779#discussion_r462563043



##########
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##########
@@ -331,6 +332,46 @@ REGISTER_PARTITIONER(mySelect)
 .setCreateSelector("strategy1", createSelector)
 .setReviewSubgraph("strategy1", myReviewSubgraph);
 
+/* \brief a basic pass that adds a new input for subgraph ops */
+MXReturnValue addInputPass(const std::string& in_graph, const std::string** out_graph,
+                          const std::unordered_map<std::string, std::string>& options,
+                          const std::unordered_map<std::string, MXTensor>& args,
+                          const std::unordered_map<std::string, MXTensor>& aux,
+                          const PassResource& res) {
+  // convert graph from JSON string to Graph/Node data structure
+  Graph *g = Graph::fromString(in_graph);
+  //find node with '_custom_subgraph_op' op type
+  for(Node* n : g->nodes) {
+    if(n->op.compare("_custom_subgraph_op") == 0) {
+      //set extra input
+      n->attrs[MX_STR_EXTRA_INPUTS] = std::to_string(1);
+      
+      //create a new input Node
+      Node* input = new Node();
+      std::string input_name = n->name + "_input";
+      input->name = input_name;
+      input->op = "null";
+      //add a new node in graph
+      g->nodes.push_back(input);
+      g->inputs.push_back(input);
+      //connect new input to node
+      input->outputs.push_back({n,(int)(n->inputs.size())});
+      //connect node to new input
+      n->inputs.push_back({input,0});
+      // add a corresponding tensor for this input
+      MXTensor* arg_ = res.alloc_arg(input_name,{1},MXContext::CPU(0),kFloat32);
+    }
+  }
+
+  //convert back to JSON string from Graph/Node
+  *out_graph = new std::string(g->toString());
+  return MX_SUCCESS;
+}
+
+REGISTER_PASS(addInputPass)

Review comment:
       Usage of graph passes through `optimize_for` was already merged in #17885; we're not changing how graph passes work in this PR.

##########
File path: example/extensions/lib_subgraph/subgraph_lib.cc
##########
@@ -331,6 +330,46 @@ REGISTER_PARTITIONER(mySelect)
 .setCreateSelector("strategy1", createSelector)
 .setReviewSubgraph("strategy1", myReviewSubgraph);
 
+/* \brief a basic pass that adds a new input for subgraph ops */
+MXReturnValue addInputPass(const std::string& in_graph, const std::string** out_graph,

Review comment:
       To add a new input you have to do 2 things:
   1. Create the new node and add it to the graph:
   ```
   Node* input = new Node();
   input->name = "my_new_input";
   input->op = "null";
   g->nodes.push_back(input);
   g->inputs.push_back(input);
   ```
   2. Create a new tensor to correspond to the new node:
   ```
   MXTensor* arg_ = res.alloc_arg("my_new_input",{1},MXContext::CPU(0),kFloat32);
   ```
   Notice that the node name and tensor name must match. 
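   
   The two steps can be put together in a small self-contained sketch. Note that `Graph`, `Node`, and `PassResource` below are simplified stand-ins for the real types declared in `lib_api.h` (the real `alloc_arg` also takes a context and a dtype); the sketch only illustrates that the node and tensor must share one name:
   ```cpp
   #include <cassert>
   #include <cstdint>
   #include <map>
   #include <string>
   #include <vector>
   
   // Simplified stand-ins for the extension-library types; the real
   // Graph, Node, and PassResource in lib_api.h differ in detail.
   struct Node {
     std::string name;
     std::string op;
   };
   
   struct Graph {
     std::vector<Node*> nodes;
     std::vector<Node*> inputs;
   };
   
   struct PassResource {
     std::map<std::string, std::vector<int64_t>> args;  // tensor name -> shape
     void alloc_arg(const std::string& name, const std::vector<int64_t>& shape) {
       args[name] = shape;  // the real call also takes a context and a dtype
     }
   };
   
   // The two steps from the comment: add an input node to the graph,
   // then allocate a tensor under the same name so MXNet can pair them up.
   void add_input(Graph* g, PassResource* res, const std::string& name) {
     Node* input = new Node{name, "null"};  // graph inputs use the "null" op
     g->nodes.push_back(input);
     g->inputs.push_back(input);
     res->alloc_arg(name, {1});             // name must match the node's name
   }
   ```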
   
   > The new input has unknown and arbitrary tensor shape.
   > Do we handle that somewhere ?
   > 
   
   In step 2 above you specify the shape when creating the tensor. In the example it's `{1}`.
   
   > Also is this added to both the
   > a) subgraph_op and
   > b) the subgraph attr of the subgraph_op ?
   > 
   > We probably do not want that in the latter.
   > 
   
   After creating the node in the graph and the corresponding tensor, you'll probably want to connect that node to your subgraph node to do anything interesting with it, like:
   ```
   //connect new input to node
   input->outputs.push_back({n,(int)(n->inputs.size())});
   //connect node to new input
   n->inputs.push_back({input,0});
   ```
   Where `n` is your subgraph node. 
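   
   The wiring is symmetric: the edge is recorded on both endpoints. A self-contained sketch with stand-in types (the real `Node`/`NodeEntry` come from `lib_api.h`) shows the same two push_backs:
   ```cpp
   #include <cassert>
   #include <string>
   #include <vector>
   
   // Simplified stand-ins for the real Node/NodeEntry types in lib_api.h.
   struct Node;
   struct NodeEntry { Node* node; int entry; };
   struct Node {
     std::string name;
     std::string op;
     std::vector<NodeEntry> inputs;   // edges into this node
     std::vector<NodeEntry> outputs;  // edges out of this node
   };
   
   // Mirror the two push_backs from the snippet above: the new input
   // feeds the next free slot of n, and n reads output 0 of the input.
   void connect_input(Node* input, Node* n) {
     input->outputs.push_back({n, (int)n->inputs.size()});
     n->inputs.push_back({input, 0});
   }
   ```
   Recording the edge in only one direction would leave the graph inconsistent, which is why both push_backs appear in the example.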
   
   Also, you'll need to set the attr on your subgraph op to make sure this input doesn't participate in shape/type inference, like: `n->attrs[MX_STR_EXTRA_INPUTS] = std::to_string(1);`
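   
   Conceptually, the attribute tells inference how many trailing inputs to skip. A hedged sketch with a stand-in attribute key (the real constant is `MX_STR_EXTRA_INPUTS` in `lib_api.h`; the `inferable_inputs` helper is purely illustrative, not part of the API):
   ```cpp
   #include <cassert>
   #include <string>
   #include <unordered_map>
   #include <vector>
   
   // Stand-in for the MX_STR_EXTRA_INPUTS constant defined in lib_api.h.
   const std::string kExtraInputs = "extra_inputs";
   
   struct Node {
     std::unordered_map<std::string, std::string> attrs;
     std::vector<std::string> inputs;  // input names, for illustration
   };
   
   // Hypothetical helper: how many leading inputs still participate in
   // shape/type inference once the extra-inputs attribute is set.
   size_t inferable_inputs(const Node& n) {
     size_t extra = 0;
     auto it = n.attrs.find(kExtraInputs);
     if (it != n.attrs.end()) extra = std::stoul(it->second);
     return n.inputs.size() - extra;
   }
   ```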
   
   > Also how is shape handled?
   > The input tensor can take an arbitrary and unknown shape at this point in the flow.
   > Once known, this shape is unknown to the user and needs to be remembered.
   > Therefore, this op likely needs the shape attr with a byte dtype.
   > Do these already work with this update? Or is it up to the user to add?
   
   The example above shows setting `kFloat32` but you can also set:
   ```
   enum MXDType {
     kFloat32 = 0,
     kFloat64 = 1,
     kFloat16 = 2,
     kUint8 = 3,
     kInt32 = 4,
     kInt8  = 5,
     kInt64 = 6,
   };
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

