FrozenGene commented on a change in pull request #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564#discussion_r361857238
########## File path: docs/dev/introduction_to_module_serialization.rst ##########
@@ -0,0 +1,211 @@

.. Licensed to the Apache Software Foundation (ASF) under one
   or more contributor license agreements. See the NOTICE file
   distributed with this work for additional information
   regarding copyright ownership. The ASF licenses this file
   to you under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance
   with the License. You may obtain a copy of the License at

..   http://www.apache.org/licenses/LICENSE-2.0

..   Unless required by applicable law or agreed to in writing,
   software distributed under the License is distributed on an
   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the
   specific language governing permissions and limitations
   under the License.

Introduction to Module Serialization
====================================

When we deploy a TVM runtime module, no matter whether it targets CPU or GPU,
TVM only needs one single dynamic shared library. The key is our unified
module serialization mechanism. This document introduces the TVM module
serialization format standard and its implementation details.

*********************
Module Export Example
*********************

Let us first build a ResNet-18 workload for GPU as our example.

.. code:: python

   from tvm import relay
   from tvm.relay import testing
   from tvm.contrib import util
   import tvm

   # ResNet-18 workload
   resnet18_mod, resnet18_params = relay.testing.resnet.get_workload(num_layers=18)

   # build
   with relay.build_config(opt_level=3):
       _, resnet18_lib, _ = relay.build_module.build(resnet18_mod, "cuda", params=resnet18_params)

   # create one temporary directory
   temp = util.tempdir()

   # path of the library
   file_name = "deploy.so"
   path_lib = temp.relpath(file_name)

   # export library
   resnet18_lib.export_library(path_lib)

   # load it back
   loaded_lib = tvm.module.load(path_lib)
   assert loaded_lib.type_key == "library"
   assert loaded_lib.imported_modules[0].type_key == "cuda"

*************
Serialization
*************

The entry point is the ``export_library`` method of ``tvm.module.Module``.
Inside this function, we do the following steps (a simplified sketch of the
flow follows the list):

1. Collect all DSO modules (LLVM modules or C modules).

2. If we have DSO modules, call their ``save`` function to save them into files.

3. Next, check whether we have imported modules, such as CUDA or
   OpenCL modules; we do not restrict the module type here. If we have
   imported modules, we create one file named ``dev.cc`` (so that we can
   compile it into one dynamic shared library) and call the function
   ``_PackImportsToC`` to serialize the imported modules.

4. Finally, we use ``fcompile``, which calls ``_cc.create_shared``, to get the
   dynamic shared library.
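To make these steps concrete, here is a simplified Python sketch of the flow.
It is illustrative only: ``pack_imports_to_c`` and ``create_shared`` are
hypothetical stand-ins for the internal ``_PackImportsToC`` and
``_cc.create_shared`` helpers, not TVM's actual API.

.. code:: python

   # Hypothetical sketch of the export flow described above.
   # Helper names are illustrative, not TVM's exact private API.
   def export_library_sketch(module, output_path):
       object_files = []

       # Steps 1-2: save the DSO-exportable module (LLVM or C) into a file.
       dso_path = output_path + ".o"
       module.save(dso_path)
       object_files.append(dso_path)

       # Step 3: if there are imported modules (CUDA, OpenCL, ...), serialize
       # their binary blobs into a generated C source file, dev.cc.
       if module.imported_modules:
           with open("dev.cc", "w") as f:
               f.write(pack_imports_to_c(module))  # stands in for _PackImportsToC
           object_files.append("dev.cc")

       # Step 4: compile everything into one dynamic shared library.
       create_shared(output_path, object_files)  # stands in for _cc.create_shared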
***************************************************
Under the Hood of Serialization and Format Standard
***************************************************

As mentioned above, the serialization work happens in ``_PackImportsToC``.
Inside this function, we first construct a helper class, ``ModuleSerializer``.
It takes ``module`` as input and does some initialization work, such as
marking the module index. Then we can use its ``SerializeModule`` method to
serialize the module.

To understand it better, let us dig into the implementation of this class a
little deeper.

When we construct ``ModuleSerializer``, we run the following code:

.. code:: c++

   explicit ModuleSerializer(runtime::Module mod) : mod_(mod) {
     Init();
   }

   private:
   void Init() {
     CreateModuleIndex();
     CreateImportTree();
   }

In ``CreateModuleIndex()``, we inspect the module import relationship using
DFS and create an index for each module. Note one invariant: the root module
is always at location 0. In our example, the module relationship is
``LLVM Module <- CUDA Module``, so the LLVM module has index 0 and the CUDA
module has index 1.

After constructing the module index, we construct the import tree
(``CreateImportTree()``), which will be used to restore the module import
relationship when we load the exported library back. In our design, we use
the CSR format to store the import tree: each row is a parent index, and its
children's indices are stored contiguously as the child indices of that row.
In code, we use ``import_tree_row_ptr_`` and ``import_tree_child_indices_``
to represent them. For our example, ``import_tree_row_ptr_`` is ``{0, 1, 1}``
and ``import_tree_child_indices_`` is ``{1}``: the children of module 0 (the
LLVM module) are the entries in the range ``[row_ptr[0], row_ptr[1])``, i.e.
module 1 (the CUDA module), and module 1 has no children.

After initialization, we can serialize the module using the
``SerializeModule`` function. Its logic assumes the serialization format
below:

.. code:: c++

   binary_blob_size
   binary_blob_type_key
   binary_blob_logic
   ...
   _import_tree
   _import_tree_logic

``binary_blob_size`` is the number of blobs we will have in this
serialization step. In our example, it equals 3: one for the LLVM module, one
for the CUDA module, and one for ``_import_tree``.

Review comment:

Firstly, when we load it back, we run the following logic:

```cpp
for (int i = 0; i < binary_blob_size; ++i) {
  Read(key);
  // do something for this key, e.g. call module.loadbinary_<type_key>,
  // or read _import_tree_row_ptr / _import_tree_child_indices
}
```

So if we don't have one slot for `_import_tree` (i.e. if we don't keep the
same blob layout as before), we cannot read it back smoothly. You could
imagine that we have one module whose type key is `_import_tree` and whose
serialization logic is to save `_import_tree_row_ptr` /
`_import_tree_child_indices`.

Secondly, `_import_tree` indicates that our library (deploy.so) uses the new
export mechanism, and we can use `_import_tree` to reconstruct the import
relationship. If we don't have this key, we fall back to the old behavior.
This design keeps us backward compatible; that is to say, in the new runtime
system, your old exported library works perfectly too.

Does it make sense to you?
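To make the load-back logic above concrete, here is a minimal Python sketch.
It is illustrative only: the stream helpers (`read_int`, `read_key`,
`read_int_list`, `load_module`) are hypothetical stand-ins for TVM's C++
runtime logic, not its actual API.

```python
# Hypothetical sketch of the blob-reading loop described above.
# All helper names are illustrative; the real logic lives in TVM's C++ runtime.
def load_binary_blobs(stream):
    modules = []
    import_tree = None  # (row_ptr, child_indices) in CSR form, if present

    blob_count = stream.read_int()  # binary_blob_size
    for _ in range(blob_count):
        key = stream.read_key()  # e.g. "llvm", "cuda", or "_import_tree"
        if key == "_import_tree":
            # New export mechanism: read the CSR arrays so the import
            # relationship can be reconstructed afterwards.
            import_tree = (stream.read_int_list(),  # _import_tree_row_ptr
                           stream.read_int_list())  # _import_tree_child_indices
        else:
            # Dispatch on the type key, like module.loadbinary_<type_key>.
            modules.append(stream.load_module(key))

    # A missing "_import_tree" blob means an old-style library: the caller
    # falls back to the old import behavior, keeping backward compatibility.
    return modules, import_tree
```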
