manupa-arm commented on a change in pull request #7785:
URL: https://github.com/apache/tvm/pull/7785#discussion_r608423833



##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -132,14 +135,25 @@ def export_model_library_format(mod: graph_executor_factory.GraphExecutorFactory
         Path to the .tar archive to generate.
     """
     tempdir = utils.tempdir()
+    is_aot = False
+    for v in mod.target.values():
+        if v.attrs.get("executor", "graph_runtime") == "aot":
+            is_aot = True
+            break
+
+    runtime = ["graph"]
+    if is_aot:
+        runtime = ["aot"]
+
     metadata = {
         "version": 1,
         "model_name": mod.libmod_name,
         "export_datetime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%SZ"),
-        "memory": _build_memory_map(mod.graph_json),
+        "memory": _build_memory_map(mod.graph),
         "target": {int(k): str(v) for k, v in mod.target.items()},
-        "runtimes": ["graph"],
+        "runtimes": runtime,

Review comment:
      @areusch I think we should deprecate/change the term "runtimes" here in MLF as well.

##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -156,10 +170,11 @@ def export_model_library_format(mod: graph_executor_factory.GraphExecutorFactory
     with open(tempdir.relpath("relay.txt"), "w") as f:
         f.write(str(mod.ir_mod))
 
-    graph_config_dir_path = tempdir.relpath(os.path.join("runtime-config", "graph"))
-    os.makedirs(graph_config_dir_path)
-    with open(os.path.join(graph_config_dir_path, "graph.json"), "w") as f:
-        f.write(mod.graph_json)
+    if not is_aot:

Review comment:
      I think we should just use the factored-out function that determines the runtime. Moreover, I think it's better to encapsulate in a function whether "mod" requires config data and, if so, what that data would be: e.g., for "graph" it would be the JSON. That way it stays extensible when new configs need to be added based on the executor. @areusch @giuseros  WDYT?

##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -132,14 +135,25 @@ def export_model_library_format(mod: graph_executor_factory.GraphExecutorFactory
         Path to the .tar archive to generate.
     """
     tempdir = utils.tempdir()
+    is_aot = False
+    for v in mod.target.values():
+        if v.attrs.get("executor", "graph_runtime") == "aot":
+            is_aot = True
+            break
+
+    runtime = ["graph"]
+    if is_aot:
+        runtime = ["aot"]
+
     metadata = {
         "version": 1,
         "model_name": mod.libmod_name,
         "export_datetime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%SZ"),
-        "memory": _build_memory_map(mod.graph_json),
+        "memory": _build_memory_map(mod.graph),

Review comment:
      I think we should gate this for the AOT executor until _build_memory_map's dependency on the presence of the graph JSON is resolved.

##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -132,14 +135,25 @@ def export_model_library_format(mod: graph_executor_factory.GraphExecutorFactory
         Path to the .tar archive to generate.
     """
     tempdir = utils.tempdir()
+    is_aot = False
+    for v in mod.target.values():

Review comment:
      I think we should factor this out into a separate function that finds the "executor", and rename the variable to "executor".

##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -86,10 +86,13 @@ def _build_memory_map(graph_json):
     list :
         A list with one entry per storage id describing that memory.
     """
-    graph = json.loads(graph_json)
+    memory_map = []
+    if graph_str.startswith("primfn"):

Review comment:
      See the discussion here as well: https://github.com/apache/tvm/pull/7533.
   
   I think we should not overload the graph here.
   Honestly, the model library format should not be affected by which executor is chosen; however, it is affected today because it relies on the presence of the "json", as I've said in the discussion above.
   
   Therefore, I think we should fix that while sorting out the tensor pinning infra. Until then, we can leave out creating the memory map for the AOT executor, because it is a limitation of model library format creation that it depends on the presence of the graph executor.

##########
File path: python/tvm/relay/build_module.py
##########
@@ -287,7 +289,8 @@ def build(ir_mod, target=None, target_host=None, params=None, mod_name="default"
 
     with tophub_context:
         bld_mod = BuildModule()
-        graph_json, runtime_mod, params = bld_mod.build(mod=ir_mod, target=target, params=params)
+
+        graph, runtime_mod, params = bld_mod.build(mod=ir_mod, target=target, params=params)
         executor_factory = _graph_executor_factory.GraphExecutorFactoryModule(

Review comment:
       Its a bit weird we are using this in the AoT Executor. I have a feeling 
that we need an AoTExecutorFactorModule that does not have a graph. Ideally, I 
think we would need an abstract class ExecutorFactorModule that implements the 
methods that are required by export_model_library. The members such as "graph" 
should not be present in AoTExecutorFactorModule.
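   The split could look roughly like this (class and method names are my guesses at the shape, not settled API):

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the proposed class split; names are illustrative.
class ExecutorFactoryModule(ABC):
    """Interface consumed by export_model_library_format (sketch)."""

    def __init__(self, ir_mod, target, libmod, libmod_name, params):
        self.ir_mod = ir_mod
        self.target = target
        self.lib = libmod
        self.libmod_name = libmod_name
        self.params = params

    @abstractmethod
    def get_executor_config(self):
        """Executor-specific config (e.g. graph JSON), or None."""

class GraphExecutorFactoryModule(ExecutorFactoryModule):
    # Only the graph flavor carries the graph JSON.
    def __init__(self, ir_mod, target, graph_json_str, libmod,
                 libmod_name, params):
        super().__init__(ir_mod, target, libmod, libmod_name, params)
        self.graph_json = graph_json_str

    def get_executor_config(self):
        return self.graph_json

class AOTExecutorFactoryModule(ExecutorFactoryModule):
    def get_executor_config(self):
        # AOT embeds control flow in generated code; no graph to export.
        return None
```

   With this shape, export_model_library_format can stay executor-agnostic and just ask the factory for its config.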

##########
File path: python/tvm/relay/backend/graph_executor_factory.py
##########
@@ -41,17 +41,18 @@ class GraphExecutorFactoryModule:
         The parameters of module
     """
 
-    def __init__(self, ir_mod, target, graph_json_str, libmod, libmod_name, params):

Review comment:
       I think we should not overload this as I've said in the other comments




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

