giuseros commented on a change in pull request #7785:
URL: https://github.com/apache/tvm/pull/7785#discussion_r608168664
##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -86,10 +86,13 @@ def _build_memory_map(graph_json):
list :
A list with one entry per storage id describing that memory.
"""
- graph = json.loads(graph_json)
+ memory_map = []
+ if graph_str.startswith("primfn"):
Review comment:
So, ok. The main point here is that I was trying to unify the "graph"
representation. When we do `graph, runtime_mod, params =
bld_mod.build(mod=ir_mod, target=target, params=params)`:
* If we are using the graph executor, `graph` is a string containing the JSON
* If we are using the AOT executor, `graph` is a string containing the string
representation of the IRModule (containing the `tvm__run_func` PrimFunc)
That's why that check is there. The graph string can represent two
different things: in the JSON case I extract the memory map, while in the AOT
case I simply return an empty list. Two questions:
* Should we return a memory map for AOT as well? I thought that was used
mostly by the graph executor
* Should I move the logic that produces the memory map into
`export_model_library_format`? I.e., something like: `if is_aot: memory_map=[]
else: memory_map=_build_memory_map(mod.graph)`
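A minimal sketch of the second option, with the dispatch hoisted into the exporter (names and the graph-JSON layout here are simplified illustrations, not the actual TVM implementation):

```python
import json


def _build_memory_map(graph_json):
    """Hypothetical simplified extraction: one entry per storage id
    found in the graph executor's JSON (assumes the usual
    attrs["storage_id"] = ["list_int", [...]] encoding)."""
    graph = json.loads(graph_json)
    storage_ids = graph["attrs"]["storage_id"][1]
    return [{"storage_id": sid} for sid in sorted(set(storage_ids))]


def export_model_library_format(graph_str, is_aot):
    """Sketch of the dispatch: the exporter, not _build_memory_map,
    decides whether a memory map makes sense for this executor."""
    memory_map = [] if is_aot else _build_memory_map(graph_str)
    return memory_map
```

This keeps `_build_memory_map` free of any `startswith("primfn")` sniffing, at the cost of the exporter needing to know which executor produced the string.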
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]