huajsj commented on a change in pull request #8497:
URL: https://github.com/apache/tvm/pull/8497#discussion_r681190187



##########
File path: src/runtime/graph_executor/graph_executor.cc
##########
@@ -358,11 +415,16 @@ void GraphExecutor::SetupStorage() {
 void GraphExecutor::SetupOpExecs() {
   op_execs_.resize(this->GetNumOfNodes());
   input_dltensors_.resize(num_node_entries());
+  output_dltensors_.resize(num_node_entries());
   std::unordered_set<uint32_t> input_node_eids;
   for (size_t i = 0; i < input_nodes_.size(); i++) {
     uint32_t nid = input_nodes_[i];
     input_node_eids.insert(entry_id(nid, 0));
   }
+  std::unordered_set<uint32_t> output_node_id;
+  for (size_t i = 0; i < outputs_.size(); i++) {
+    output_node_id.insert(outputs_[i].node_id);

Review comment:
      Should `entry_id` be used here instead of the raw `node_id`?

##########
File path: src/runtime/graph_executor/graph_executor.cc
##########
@@ -384,9 +446,15 @@ void GraphExecutor::SetupOpExecs() {
 
     for (size_t i = 0; i < inode.inputs.size(); i++) {
       uint32_t eid = this->entry_id(inode.inputs[i]);
-      // check if op input is model input
-      if (input_node_eids.count(eid) > 0) {
-        input_dltensors_[eid].push_back(static_cast<DLTensor*>(op_args->arg_values[i].v_handle));

Review comment:
       It seems a condition check is still needed here to verify whether the node in `inputs` is a model input or output; without such a check, every node in `inputs` would be pushed into `input_dltensors_`, which is logically incorrect.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
