masahi edited a comment on pull request #10578:
URL: https://github.com/apache/tvm/pull/10578#issuecomment-1065601359
> Another improvement would be to remove all metascheduler/autoscheduler dependent code from te_compiler. te_compiler could return a list of tasks and the metascheduler/autoscheduler could handle it as they please
This is basically the goal of this PR, and it has already been achieved for the meta scheduler. There is no longer a meta scheduler-specific code path for task extraction in `te_compiler_cache.cc`. Most of the code in `task_extraction.cc`, in particular
```cpp
Array<ExtractedTask> ExtractTask(IRModule mod, Target target, Map<String, Constant> params) {
  backend::BindParamsInModule(mod, params);

  // is_vm=true for backward compatibility
  Array<Pass> pass_seqs = relay::backend::GetPassPrefix(/*is_homogenous=*/true, /*is_vm=*/true);
  pass_seqs.push_back(transform::FuseOps());

  transform::Sequential seq(pass_seqs);
  auto opt_mod = seq(std::move(mod));

  Array<ExtractedTask> tasks;
  std::unordered_set<tec::CCacheKey> cache_;
  std::unordered_map<std::string, int> name_map;

  PostOrderVisit(opt_mod->Lookup("main"), [target, &tasks, &cache_, &name_map](const Expr& exp) {
    if (exp->IsInstance<FunctionNode>()) {
      Function relay_func = Downcast<Function>(exp);
      tec::CCacheKey cache_key(relay_func, target);
      if (relay_func->HasNonzeroAttr(attr::kPrimitive) && cache_.find(cache_key) == cache_.end()) {
        Array<te::Tensor> outputs;
        std::string fused_name;
        std::tie(outputs, fused_name) = tec::LowerTECompute(relay_func, target, /*return_inputs=*/true);
        ...
```
is not specific to any tuner and can thus be shared. Different tuners can customize the behavior as they please, e.g. how to create "task" objects from the lowered TE compute.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]