manupa-arm commented on a change in pull request #8014:
URL: https://github.com/apache/tvm/pull/8014#discussion_r634900775



##########
File path: include/tvm/runtime/module.h
##########
@@ -231,7 +231,7 @@ constexpr const char* tvm_param_prefix = "__tvm_param__";
 /*! \brief A PackedFunc that looks up linked parameters by storage_id. */
 constexpr const char* tvm_lookup_linked_param = "_lookup_linked_param";
 /*! \brief The main AOT executor function */
-constexpr const char* tvm_run_func_prefix = "tvm__run_func";
+constexpr const char* tvm_run_func_suffix = "run_func";

Review comment:
       I think we should drop "func" because it adds little value in the function's name.
   
   cc : @areusch 

##########
File path: src/relay/backend/aot_executor_codegen.cc
##########
@@ -374,13 +374,14 @@ class AOTExecutorCodegen : public ExprVisitor {
     }
 
     auto pf0 = GetPackedFunc("relay.backend._make_CCacheKey");
-    auto pf1 = GetPackedFunc("relay.backend._CompileEngineLower");
+    auto pf1 = GetPackedFunc("relay.backend._CompileEngineLowerWithModuleName");

Review comment:
       Why do we need a separate function? Isn't it possible to improve _CompileEngineLower to accept a module name in general?

##########
File path: src/relay/backend/compile_engine.cc
##########
@@ -612,11 +613,20 @@ class MakeShapeFunc : public backend::MemoizedExprTranslator<Array<te::Tensor>>
 class CompileEngineImpl : public CompileEngineNode {
  public:
   // Lower the function.
-  CachedFunc Lower(const CCacheKey& key) { return LowerInternal(key)->cached_func; }
+  CachedFunc Lower(const CCacheKey& key) {

Review comment:
       I think we can have just one Lower function -- is there a reason why mangling the name would be bad?
   
   cc : @areusch 

##########
File path: python/tvm/relay/build_module.py
##########
@@ -145,7 +150,10 @@ def build(self, mod, target=None, target_host=None, params=None, executor="graph
         old_autotvm_silent = autotvm.GLOBAL_SCOPE.silent
         autotvm.GLOBAL_SCOPE.silent = use_auto_scheduler
 
-        self._build(mod, target, target_host, executor)
+        if not mod_name:

Review comment:
       Why can't the default be the empty string itself?
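   
   With an empty-string default, the `if not mod_name:` special case goes away entirely. A minimal, self-contained sketch of the idea (the `_mangle` helper below is purely illustrative and not code from this PR):
   
   ```python
   # Illustrative only: "" as the default means "no prefix", so there is no
   # None/empty branch to maintain.
   def _mangle(func_name, mod_name=""):
       """Prefix func_name with mod_name when one was given."""
       return f"{mod_name}_{func_name}" if mod_name else func_name
   
   assert _mangle("run") == "run"                  # default needs no special case
   assert _mangle("run", "my_mod") == "my_mod_run"
   ```
   
   Then `build` could take `mod_name=""` and pass it straight through to `self._build(...)`.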

##########
File path: python/tvm/micro/model_library_format.py
##########
@@ -55,17 +58,19 @@ def _populate_codegen_dir(mod, codegen_dir: str):
 
     mod_indices = {"lib": 0, "src": 0}
     host_codegen_dir = os.path.join(codegen_dir, "host")
+    lib_name = f"{module_name}_lib" if module_name else "lib"

Review comment:
       Maybe we can make the default an empty string and always append it?
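   
   A rough sketch of what I mean (hypothetical helper, not the actual model_library_format code; where the separator lives is an open detail):
   
   ```python
   # Hypothetical: default module_name to "" and build the name unconditionally
   # instead of branching on truthiness.
   def _lib_name(module_name=""):
       return f"{module_name}_lib"
   
   assert _lib_name("my_mod") == "my_mod_lib"
   assert _lib_name() == "_lib"  # the empty default still needs a naming decision
   ```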

##########
File path: python/tvm/relay/transform/transform.py
##########
@@ -726,6 +726,18 @@ def PartitionGraph():
     return _ffi_api.PartitionGraph()
 
 
+def MangleGraph(mod_name):

Review comment:
       nit: I think this is name-mangling the functions, right? We might need a name that indicates it is mangling the function names. What do others think?
   
   cc : @areusch @Mousius 

##########
File path: python/tvm/relay/build_module.py
##########
@@ -70,6 +70,20 @@ def _convert_param_map(params):
     return inputs
 
 
+def _is_valid_modname(mod_name):
+    """Determine if mod_name is a valid string to use inside
+    function names
+    """
+    if mod_name:
+        try:
+            mod_name.encode("ascii")
+            return True
+        except UnicodeEncodeError:
+            return False
+
+    return True

Review comment:
       Why do we need this to be true if mod_name is None?
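   
   For reference, this is roughly the check I would expect if a missing name were not treated as valid (a sketch only, not a concrete request for this PR):
   
   ```python
   # Sketch: reject None/"" instead of silently accepting them.
   def _is_valid_modname(mod_name):
       """Return True only for non-empty, ASCII-encodable module names."""
       if not mod_name:
           return False
       try:
           mod_name.encode("ascii")
           return True
       except UnicodeEncodeError:
           return False
   ```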

##########
File path: tests/python/relay/aot/test_crt_aot.py
##########
@@ -348,7 +348,9 @@ def test_byoc_utvm(use_calculated_workspaces):
     mod = tvm.IRModule()
     ann = CcompilerAnnotator()
     mod["main"] = ann.visit(f)
+
     mod = tvm.relay.transform.PartitionGraph()(mod)
+    mod = tvm.relay.transform.MangleGraph("my_mod")(mod)

Review comment:
       Is there a use case, we would not want to name mangle ? Im think name 
mangled function names are strictly superior. If thats the case, I dont like 
the idea having to run another pass for all BYOC codegens. 
   
   Seeing the previous conversation with @areusch , I agree that we dont need 
another PartitionGraphWithModName but also not sure we need to call a seperate 
pass explicitly either. My suggestion is we should name mangle by default 
otherwise the compilation pathway feels bit fragile.
   
   At a minimum, we should move this pass as a sub-pass of PartitionGraph and 
call it by default. 
   This comment also has a synergy with why we need LowerWithModName in compile 
engine. 
   
   I think Im missing whats the value of not performing name mangling when user 
do provide with a mod_name.
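   
   To make that concrete, the test above would then reduce to something like this (the `mod_name` keyword on PartitionGraph is hypothetical, just to illustrate the shape of the API):
   
   ```python
   # Hypothetical API sketch: mangling happens inside PartitionGraph by default,
   # so BYOC flows do not need a separate MangleGraph pass.
   mod = tvm.relay.transform.PartitionGraph(mod_name="my_mod")(mod)
   ```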

##########
File path: src/relay/backend/compile_engine.h
##########
@@ -202,6 +202,13 @@ class CompileEngineNode : public Object {
    * \return The result.
    */
   virtual CachedFunc Lower(const CCacheKey& key) = 0;
+  /*!
+   * \brief Get the lowered result and mangle function names with a specific module name.
+   * \param key The key to the cached function.
+   * \param mod_name The module name to mangle the functions
+   * \return The result.
+   */
+  virtual CachedFunc LowerWithModuleName(const CCacheKey& key, const String mod_name) = 0;

Review comment:
       Same as above -- I'm missing the requirement for a whole new function.

##########
File path: src/relay/backend/compile_engine.cc
##########
@@ -612,11 +613,20 @@ class MakeShapeFunc : public backend::MemoizedExprTranslator<Array<te::Tensor>>
 class CompileEngineImpl : public CompileEngineNode {
  public:
   // Lower the function.
-  CachedFunc Lower(const CCacheKey& key) { return LowerInternal(key)->cached_func; }
+  CachedFunc Lower(const CCacheKey& key) {
+    auto mangle_fn = [](String name) { return name; };
+    return LowerInternal(key, mangle_fn)->cached_func;
+  }
+
+  CachedFunc LowerWithModuleName(const CCacheKey& key, const String mod_name) {

Review comment:
       Same comment as above




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

