mkolod commented on a change in pull request #11325: [MXNET-703] TensorRT runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r205860313
 
 

 ##########
 File path: src/executor/graph_executor.cc
 ##########
 @@ -941,6 +970,114 @@ void GraphExecutor::FinishInitGraph(nnvm::Symbol symbol,
   this->InitOpSegs();
 }
 
+/*!
+ * \brief This function is triggered after each TensorRT subgraph replacement pass.
+ * It resets the arguments of GraphExecutor::Init(...), since some variables
+ * (weights and biases) are absorbed into the TRT engine, and reruns attribute
+ * inference to match the new topology.
+ */
+Graph GraphExecutor::ReinitGraph(Graph&& g, const Context &default_ctx,
+                                 const std::map<std::string, Context> &ctx_map,
+                                 std::vector<Context> *in_arg_ctxes,
+                                 std::vector<Context> *arg_grad_ctxes,
+                                 std::vector<Context> *aux_state_ctxes,
+                                 std::vector<OpReqType> *grad_req_types,
+                                 std::unordered_map<std::string, TShape> *arg_shape_map,
+                                 std::unordered_map<std::string, int> *arg_dtype_map,
+                                 std::unordered_map<std::string, int> *arg_stype_map,
+                                 std::unordered_map<std::string, NDArray> *params_map) {
+  std::unordered_set<std::string> to_remove_params;
+  for (auto& el : *params_map) {
+    to_remove_params.insert(el.first);
+  }
+
+  DFSVisit(g.outputs, [&to_remove_params](const nnvm::NodePtr n) {
+    to_remove_params.erase(n->attrs.name);
+  });
+
+  for (auto& el : to_remove_params) {
+    params_map->erase(el);
+    arg_shape_map->erase(el);
+    arg_dtype_map->erase(el);
+    arg_stype_map->erase(el);
+  }
+  const auto &idx = g.indexed_graph();
+  num_forward_inputs_ = idx.input_nodes().size();
+  in_arg_ctxes->resize(num_forward_inputs_ - idx.mutable_input_nodes().size());
 
 Review comment:
  @zheng-da Consider any network, such as VGG, ResNet, etc. For any subgraph extracted by the TensorRT pass, the weights need to be provided to TensorRT at engine construction time. These weights then become "baked into" the engine. Once the subgraph is substituted by a TensorRT node, those graph inputs become part of the TensorRT engine and are no longer used by the NNVM graph explicitly. Hence, they need to be removed, both to avoid wasting memory and to prevent the confusion of inputs that still exist in the NNVM graph but are no longer used.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
