giuseros commented on a change in pull request #7785:
URL: https://github.com/apache/tvm/pull/7785#discussion_r617752937
##########
File path: src/relay/backend/build_module.cc
##########
@@ -473,23 +517,25 @@ class RelayBuildModule : public runtime::ModuleNode {
// Relay IRModule -> IRModule optimizations.
relay_module = Optimize(relay_module, targets_, params);
+
// Get the updated function.
auto func = Downcast<Function>(relay_module->Lookup("main"));
// Generate code for the updated function.
- graph_codegen_ = std::unique_ptr<GraphCodegen>(new GraphCodegen());
- graph_codegen_->Init(nullptr, targets_);
- graph_codegen_->Codegen(func);
-
- ret_.graph_json = graph_codegen_->GetJSON();
- ret_.params = graph_codegen_->GetParams();
+ const String executor_str =
+ target_host->GetAttr<String>("executor").value_or(kTvmExecutorGraph);
Review comment:
Oops, I didn't refresh the page, so I didn't see the comments :) @areusch I
was seeing a failure in a GPU test. The reason was the following:
* To determine which `FactoryModule` to use, I query the
`get_executor_type()` method of the `BuildModule` C++ object.
* `get_executor_type` queries the `target_host` and then does:
`executor_str =
target_host->GetAttr<String>("executor").value_or(kTvmExecutorGraph)`
* When `target = "cuda"`, the `target_host` is not defined, so this was
crashing.
Two possible solutions:
* I can work around it by checking `target_host.defined()` and defaulting
to `"graph"`.
* I can follow Manupa's suggestion and make `executor` a build
parameter.
As I said, the change is not invasive, because the only place where I need
the executor is within `build_module.cc`. Please have a look and let me know
whether you prefer this or reverting to a target option.
My opinion is that, since the change does not look invasive, we can have
it as a build parameter in this PR. What do you think?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]