areusch commented on a change in pull request #7785:
URL: https://github.com/apache/tvm/pull/7785#discussion_r608904060



##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -156,10 +170,11 @@ def export_model_library_format(mod: graph_executor_factory.GraphExecutorFactory
     with open(tempdir.relpath("relay.txt"), "w") as f:
         f.write(str(mod.ir_mod))
 
-    graph_config_dir_path = tempdir.relpath(os.path.join("runtime-config", "graph"))
-    os.makedirs(graph_config_dir_path)
-    with open(os.path.join(graph_config_dir_path, "graph.json"), "w") as f:
-        f.write(mod.graph_json)
+    if not is_aot:

Review comment:
       @manupa-arm just to clarify: do you mean factoring this logic out into a graph-specific function and invoking it based on the executor type? If so, I agree with that. I'm not convinced AOT will always require zero configuration, particularly given the conversation in 
https://discuss.tvm.apache.org/t/mapping-tensorir-te-to-heterogenous-systems/9617




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
