sunjiweiswift commented on a change in pull request #8497:
URL: https://github.com/apache/tvm/pull/8497#discussion_r678244681
##########
File path: src/runtime/graph_executor/graph_executor.cc
##########
@@ -384,9 +446,15 @@ void GraphExecutor::SetupOpExecs() {
for (size_t i = 0; i < inode.inputs.size(); i++) {
uint32_t eid = this->entry_id(inode.inputs[i]);
- // check if op input is model input
- if (input_node_eids.count(eid) > 0) {
-
input_dltensors_[eid].push_back(static_cast<DLTensor*>(op_args->arg_values[i].v_handle));
Review comment:
```
    A   B
     \ /
  elemwise_add(out0)
        \
    C   copy
     \ /
  elemwise_sub(out1)
```
When set_output_zero_copy(out0) is called, we also need to update the input
memory address of the "copy" node. I use input_dltensors_ to update that
node's input address. However, this check only allows entries for model
input nodes to be inserted into input_dltensors_, and the "copy" node is
not a model input node.
##########
File path: src/runtime/graph_executor/graph_executor.h
##########
@@ -398,8 +420,12 @@ class TVM_DLL GraphExecutor : public ModuleNode {
std::vector<uint32_t> input_nodes_;
/*! \brief Map of input names to input indices. */
std::unordered_map<std::string, uint32_t> input_map_;
+ /*! \brief Map of input names to output indices. */
Review comment:
fixed
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]