[GitHub] [tvm-rfcs] areusch commented on a change in pull request #46: Module Based Model Runtime for AOT
areusch commented on a change in pull request #46:
URL: https://github.com/apache/tvm-rfcs/pull/46#discussion_r806455336

## File path: rfcs/0046-module-based-model-runtime-for-aot.md

## @@ -0,0 +1,348 @@

# Module-based Model Runtime Interface for AOT

- Feature Name: module_based_model_runtime_for_aot
- Start Date: 2021-09-17
- RFC PR: [apache/tvm-rfcs#0046](https://github.com/apache/tvm-rfcs/pull/0046)
- GitHub Issue: [apache/tvm#](https://github.com/apache/tvm/issues/)

# **Summary**

This RFC describes a [Module-based Model Runtime
interface](https://discuss.tvm.apache.org/t/discuss-module-based-model-runtime-interface/5025) for
the [Ahead-of-Time Executor](https://discuss.tvm.apache.org/t/implementing-aot-in-tvm/9206), thereby
enabling its use from the TVM C++ Runtime.

# **Motivation**

The microTVM project has made significant progress towards an Ahead-of-Time Executor for compiled
Relay models. At the time of writing, it is now possible to codegen a TIR function which executes
Relay models that have known shapes, no graph-level control flow, and CPU-only execution. Right now,
the C runtime is the only runtime environment which can interact with this generated code. However,
significant interest exists in enabling the C++ runtime to use the Ahead-of-Time executor.

# **Guide-level explanation**

Users select the AOT executor at compile time through the traditional GraphExecutor compilation flow
(e.g. `tvm.relay.build`) by including `--executor=aot` in the Target [1]. The return value of
`tvm.relay.build` in this case is an `AotExecutorFactory` Module object. Users instantiate the AOT
executor via `AotExecutorFactory` as they do with `GraphExecutor`:

```python
ir_mod = tvm.parser.fromtext("""\
  #[version = "0.0.5"]
  def @main(%a : Tensor[(1, 2), uint8], %b : Tensor[(1, 2), uint8]) {
    %0 = %a + %b;
    %0
  }"""
)

with PassConfig(opt_level=3):
    factory: AotExecutorFactory = tvm.relay.build(
        ir_mod, "llvm -executor=aot", module_name="my_mod")

aot_executor: AotExecutor = factory["my_mod"](tvm.cpu(0))
```

`AotExecutor` supports the traditional Module-Based Model Runtime Interface and can be used as a
user normally would use `GraphExecutor`:

```python
aot_executor.set_input("a", tvm.nd.array(np.array([[1, 2]], dtype="uint8")))
aot_executor.set_input("b", tvm.nd.array(np.array([[3, 5]], dtype="uint8")))
aot_executor.run()
output = aot_executor.get_output(0)
assert (output.asnumpy() == np.array([[4, 7]], dtype="uint8")).all()
```

[1] NOTE: The target string is not the final place this customization should be made. However, it
has been the place where runtime-related options have accumulated. A separate RFC will split the
Target string into Target options (which affect tuning) and runtime options.

# **Reference-level explanation**

Already committed to TVM is the AotExecutorCodegen. This module produces a TIR top-level function
which invokes the Relay operators (implemented in TIR) in the correct order. An example is given
below:

```
PrimFunc([input1, input2, output]) attrs={"global_symbol": "tvmgen_my_mod_run_model", "runner_function": (bool)1} {
  // attr [(nullptr)] device_id = 0
  // attr [(nullptr)] device_type = 1
  tir.tvm_call_packed("tvmgen_my_mod_fused_add", input1, input2, output)
}
```

The AotExecutor then needs to accomplish the following to meet the Module-based Model Runtime
Interface:
1. Allocate input and output tensors as defined in the `run_model` function using the correct Device

Review comment:

ah i see. yeah this makes sense. i'm wary of introducing too much complexity here particularly when
user/platform intervention may be required to implement the double-buffer (e.g. if DMA is used to
fill the buffer while the SoC is sleeping). it would be great to continue discussing this in a
follow-on!
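For completeness alongside the guide-level Python example quoted above, the sketch below shows how
the same Module-Based Model Runtime calls could look from the C++ runtime, which is the use case
this RFC targets. It is an illustrative sketch only, not part of the RFC text: it assumes the
factory has been exported to a shared library named `my_mod.so`, and that `AotExecutor` exposes the
same `set_input`/`run`/`get_output` packed functions that `GraphExecutor` does.

```c++
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>

#include <cstdint>
#include <vector>

int main() {
  // Load the exported AotExecutorFactory and instantiate the executor for CPU.
  // "my_mod.so" is an assumed export path; "my_mod" matches module_name above.
  tvm::runtime::Module factory = tvm::runtime::Module::LoadFromFile("my_mod.so");
  DLDevice dev{kDLCPU, 0};
  tvm::runtime::Module aot_executor = factory.GetFunction("my_mod")(dev);

  // Allocate and fill the two (1, 2) uint8 inputs from the Python example.
  DLDataType u8{kDLUInt, 8, 1};
  tvm::runtime::NDArray a = tvm::runtime::NDArray::Empty({1, 2}, u8, dev);
  tvm::runtime::NDArray b = tvm::runtime::NDArray::Empty({1, 2}, u8, dev);
  std::vector<uint8_t> a_data{1, 2}, b_data{3, 5};
  a.CopyFromBytes(a_data.data(), a_data.size());
  b.CopyFromBytes(b_data.data(), b_data.size());

  // Standard Module-Based Model Runtime calls, mirroring the Python usage.
  aot_executor.GetFunction("set_input")("a", a);
  aot_executor.GetFunction("set_input")("b", b);
  aot_executor.GetFunction("run")();
  tvm::runtime::NDArray output = aot_executor.GetFunction("get_output")(0);
  return 0;
}
```

This mirrors the existing `GraphExecutorFactory` flow in the C++ runtime, where the factory
module's entry function returns an executor Module bound to the requested device.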
[GitHub] [tvm-rfcs] areusch commented on a change in pull request #46: Module Based Model Runtime for AOT
areusch commented on a change in pull request #46:
URL: https://github.com/apache/tvm-rfcs/pull/46#discussion_r807136345

## File path: rfcs/0046-module-based-model-runtime-for-aot.md
1. Allocate input and output tensors as defined in the `run_model` function using the correct Device
   API.
2. Provide a mapping from relay parameter name to positional argument.
3. Invoke the generated TIR function and provide profiling.

### Compiler ↔ Runtime Metadata

In order to implement (1) and (2) above, additional metadata about the `run_model` function needs to
be communicated from Compiler to Runtime:

- The mapping between Relay parameter name and TIR argument position
- The number of inputs and outputs
- The type of each parameter
- Information sufficient to choose a Device API to allocate memory for that data.

At present, Metadata is passed from Compiler to Runtime in several different ways:

1. Constant DLTensor can be bundled with code and supplied to `runtime::Module` via
   `runtime::MetadataModule`
2. Many non-DSO-exportable backends (`cuda`, `hexagon`, `metal`, `opencl`, `sdaccel`, `rocm`,
   `vulkan`) have adopted the convention of including a
   [`runtime::FunctionInfo`](https://github.com/apache/tvm/blob/main/src/runtime/meta_data.h#L106)
   (NOTE: distinct from `tvm::relay::transform::FunctionInfo`) in their serialization:

```c++
/*! \brief function information needed by device */
struct FunctionInfo {
  std::string name;
  std::vector<DLDataType> arg_types;
  std::vector<std::string> launch_param_tags;
};
```

3. AotExecutorCodegen and
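To make the metadata requirements quoted above concrete, the following is a small illustrative
sketch of the kind of per-argument record that could carry them from Compiler to Runtime. The type
and field names (`AotTensorInfo`, `AotModelMetadata`) are hypothetical placeholders chosen for this
example, not TVM's actual metadata layout.

```c++
#include <dlpack/dlpack.h>

#include <cstdint>
#include <string>
#include <vector>

// Hypothetical per-argument record: everything the runtime needs to map a Relay
// parameter name to a positional argument and to allocate its backing tensor.
struct AotTensorInfo {
  std::string relay_name;      // Relay parameter name ("a", "b", "output0", ...)
  int arg_index;               // position in the generated run_model PrimFunc
  std::vector<int64_t> shape;  // static shape (AOT currently requires known shapes)
  DLDataType dtype;            // element type
  DLDevice device;             // enough to choose the Device API for allocation
};

// Hypothetical model-level record: input/output counts fall out of the vector sizes.
struct AotModelMetadata {
  std::string mod_name;  // e.g. "my_mod"
  std::vector<AotTensorInfo> inputs;
  std::vector<AotTensorInfo> outputs;
};
```

Each of the four bullet points maps onto a field: name-to-position via `relay_name`/`arg_index`,
counts via the two vectors, types via `dtype`/`shape`, and Device API selection via `device`.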
[GitHub] [tvm-rfcs] areusch commented on a change in pull request #46: Module Based Model Runtime for AOT
areusch commented on a change in pull request #46:
URL: https://github.com/apache/tvm-rfcs/pull/46#discussion_r806465372

## File path: rfcs/0046-module-based-model-runtime-for-aot.md
[GitHub] [tvm-rfcs] areusch commented on a change in pull request #46: Module Based Model Runtime for AOT
areusch commented on a change in pull request #46:
URL: https://github.com/apache/tvm-rfcs/pull/46#discussion_r802025838

## File path: rfcs/0046-module-based-model-runtime-for-aot.md
1. Allocate input and output tensors as defined in the `run_model` function using the correct Device

Review comment:

> I suppose the proposal here is we create a runtime wrapper around it -- which also works but with
> cons of not exposing these allocations to the core compiler for further optimization.

I actually think we should defer the question of how the input tensors are loaded and consumed to
the runtime. You're right that AOTExecutor here is intended to be a generic runtime wrapper when
there are enough resources on the system to handle this at runtime. However, consider a microTVM
use case where DMA is used to copy data from e.g. a camera into an SRAM dedicated to accelerator
usage. In this case, TVM should really do as much as possible to stay out of the way--the copy
operation is application- or at least SoC-specific. TVM should provide pointer and sizing
information so this can be handled separately. That argues for exposing some type of
`get_input_tensor` function as you're mentioning here as a first-class citizen of MBMR. I think we
should take that up as we move on to the C Device API and USMP integrations.

> However, Im curious to know whether it would just easier to create a copy in the main body to
> tir.allocate node that get translated to a device copy -- which I think has the same effect.

I think I see what you're raising here--we have to explicitly inject
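To illustrate the zero-copy flow discussed in this comment, here is a hedged sketch of how an
application could fill an executor-owned input buffer directly (for example via DMA) before calling
`run`. The `get_input` accessor name is assumed here by analogy with `GraphExecutor`, and
`platform_dma_fill` is a placeholder for application- or SoC-specific code; neither is an API this
RFC defines.

```c++
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>

#include <cstddef>

// Placeholder for application/SoC-specific code (e.g. camera DMA into SRAM).
void platform_dma_fill(void* dst, size_t nbytes);

// Fill input "a" in place, then run, without an intermediate host-side copy.
void RunZeroCopy(tvm::runtime::Module aot_executor) {
  // Assumed accessor returning the executor-owned tensor, mirroring GraphExecutor's get_input.
  tvm::runtime::NDArray input = aot_executor.GetFunction("get_input")("a");
  void* dst = input->data;  // pointer TVM will read when run() executes
  size_t nbytes = tvm::runtime::GetDataSize(*input.operator->());
  platform_dma_fill(dst, nbytes);
  aot_executor.GetFunction("run")();
}
```

The point is the division of labor described in the comment: TVM supplies the pointer and sizing
information, while the copy itself stays outside TVM.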
[GitHub] [tvm-rfcs] areusch commented on a change in pull request #46: Module Based Model Runtime for AOT
areusch commented on a change in pull request #46:
URL: https://github.com/apache/tvm-rfcs/pull/46#discussion_r799099853

## File path: rfcs/0046-module-based-model-runtime-for-aot.md
1. Allocate input and output tensors as defined in the `run_model` function using the correct Device

Review comment:

you mean that AOTExecutorCodegen does not emit information to determine the device, correct? I
think right now it does in the sense that it hardcodes the device to (kDLCPU, 0) wherever it uses
that, right?

here I'm not suggesting we modify the codegen to create tir.allocate nodes in the body of the
emitted function. I'm only proposing that we emit metadata that contains shape, dtype, and device
information for the expected inputs and outputs from that main func, and then defer allocation to
either a runtime component (in the case of the C++ runtime) or a compile-time component (if used
with the C runtime and microTVM).

The runtime changes proposed here align with E1. We could also implement `set_input_zero_copy` to
align with E2. But again, these are runtime changes which require that additional output from
AOTExecutorCodegen to work properly.
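As a sketch of the runtime-side allocation this comment describes — building I/O tensors from
emitted shape/dtype/device metadata instead of a hard-coded `(kDLCPU, 0)` — the example below uses
`NDArray::Empty`, which dispatches to the Device API registered for the given device. The helper
names and the exact shape of the metadata are illustrative assumptions, not the proposed codegen
output.

```c++
#include <dlpack/dlpack.h>
#include <tvm/runtime/ndarray.h>

#include <cstdint>
#include <vector>

// Illustrative helper: given per-argument metadata, allocate one backing tensor.
// NDArray::Empty picks the allocator via the DeviceAPI registered for `device`,
// so the correct Device API is chosen from the metadata alone.
tvm::runtime::NDArray AllocateArg(const std::vector<int64_t>& shape, DLDataType dtype,
                                  DLDevice device) {
  return tvm::runtime::NDArray::Empty(tvm::runtime::ShapeTuple(shape), dtype, device);
}

// Example: an input described as shape (1, 2), uint8, on device (kDLCPU, 0).
tvm::runtime::NDArray MakeExampleInput() {
  return AllocateArg({1, 2}, DLDataType{kDLUInt, 8, 1}, DLDevice{kDLCPU, 0});
}
```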