manupa-arm commented on a change in pull request #7304:
URL: https://github.com/apache/tvm/pull/7304#discussion_r570889644
##########
File path: python/tvm/driver/tvmc/compiler.py
##########
@@ -191,22 +198,21 @@ def compile_model(
if use_autoscheduler:
with auto_scheduler.ApplyHistoryBest(tuning_records):
- with tvm.transform.PassContext(
opt_level=3, config={"relay.backend.use_auto_scheduler": True}
- ):
+ config["relay.backend.use_auto_scheduler"] = True
+ with tvm.transform.PassContext(opt_level=3, config=config):
logger.debug("building relay graph with autoscheduler")
graph_module = relay.build(
mod, target=target, params=params,
target_host=target_host
)
else:
with autotvm.apply_history_best(tuning_records):
- with tvm.transform.PassContext(opt_level=3):
+ with tvm.transform.PassContext(opt_level=3, config=config):
logger.debug("building relay graph with tuning records")
graph_module = relay.build(
mod, tvm_target, params=params, target_host=target_host
)
else:
- with tvm.transform.PassContext(opt_level=3):
+ with tvm.transform.PassContext(opt_level=3, config=config):
Review comment:
Sorry if I was not clear before :)
Well, I was referring to the opts that are parsed from the CLI, not questioning
the ability of PassContext to accept config options.
I think it's within the scope of this patch to introduce CLI parsing of
additional opts that go into the config of the PassContext. So we might want to
make sure that what the user gives as opts on the CLI ends up in the PassContext
correctly.
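
For illustration, a minimal sketch of what such CLI parsing could look like — the helper name `parse_pass_config` and the `key=value` flag format are assumptions here, not TVMC's actual interface:

```python
def parse_pass_config(opts):
    """Hypothetical helper: turn CLI strings like "key=value" into a
    config dict suitable for tvm.transform.PassContext(config=...)."""
    config = {}
    for opt in opts:
        key, sep, value = opt.partition("=")
        if not sep or not key or not value:
            raise ValueError("expected key=value, got: %r" % opt)
        lowered = value.lower()
        if lowered in ("true", "false"):
            # Boolean pass options such as relay.backend.use_auto_scheduler
            config[key] = lowered == "true"
        elif value.isdigit():
            config[key] = int(value)
        else:
            config[key] = value
    return config
```

With a helper like this, the resulting dict could be passed straight through as the `config` argument shared by all three `PassContext` call sites in the patch, which is one way to verify the CLI opts actually reach the PassContext.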
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]