comaniac commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r690561308



##########
File path: rfcs/0012-pipeline-executor.md
##########
@@ -0,0 +1,235 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+- Feature Name: Pipeline Executor
+- Start Date: 2021-07-30
+- RFC PR: [apache/tvm-rfcs#14](https://github.com/apache/tvm-rfcs/pull/14)
+- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)
+
+## 1. Summary
+
+
+This proposal introduces Pipeline Executor: a runtime executor that schedules
+a list of Relay modules in a pipeline to achieve task-level parallelism and
+improve computation throughput.
+
+## 2. Motivation
+
+
+
+Currently, more and more edge-device inference deployments happen on SoC devices.
+Since SoC devices have heterogeneous chipsets such as GPU, FPGA, CPU, and DSP,
+reaching the best performance requires running an ML network across these
+heterogeneous chipsets. However, the current graph executor has no parallelism
+logic, and the existing data-parallelism solution only supports parallelism on
+a homogeneous chipset (device). Consequently, the only way to do batch
+processing on heterogeneous devices with TVM is to treat a whole ML network as
+one schedule unit and run it on different heterogeneous devices, but that
+causes a latency issue: the slowest chipset becomes the latency bottleneck for
+processing a single input.
+
+Therefore, we need a runtime executor that provides parallel scheduling with a
+finer-grained schedule unit, such as a subgraph (a group of operators with
+dependency relations), to use SoC heterogeneous hardware resources more
+efficiently and achieve better performance.
+
+
+### Benefits of Pipeline Executor
+
+Pipeline Executor provides three benefits:
+
+* Computing a single network on multiple backends in parallel to improve
+performance.
+
+* Using RPC to perform distributed computation across multiple remote devices.
+
+* The capability to integrate non-DNN model functions.
+
+## 3. Guide-level explanation
Pipeline Executor is a runtime executor that implements pipeline execution
+logic for multiple subgraphs and relies on graph_executor for operator storage
+and execution.
+
+This section introduces how to use Pipeline Executor:
+
+* 1. Manually split a Relay module into a list of Relay modules and generate the module configuration.
+* 2. Use pipeline_executor to build a pipeline module with the subgraphs and the configuration.
+* 3. Use pipeline_executor to load the pipeline module and run the network in pipeline-parallelism mode.
+
+### 3.1. Manually split a Relay module into a list of Relay modules and generate the module configuration.
+
+```python
+
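+# Split the Relay module into three sub-modules; my_manual_partitioner is a
+# user-defined partitioning function (automatic splitting is future work).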
+mod1, mod2, mod3 = my_manual_partitioner(mod)
+pipe_cfg = PipelineModuleConfig()
+
+# Define pipeline inputs. Here I assume two inputs of mod1 and one input of mod3 are the pipeline inputs.
+pipe_cfg.inputs["data_0"] = (mod1, "data_0")
+pipe_cfg.inputs["data_1"] = (mod1, "data_1")
+pipe_cfg.inputs["data_2"] = (mod3, "data_0")
+
+# Define pipeline outputs to be the first output of mod3.
+pipe_cfg.outputs.append((mod3, 0))
+
+# Define connections.
+pipe_cfg.connect(mod1, 0, mod2, "data_0") # mod1.output(0) -> mod2.data_0
+pipe_cfg.connect(mod2, 0, mod3, "data_1") # mod2.output(0) -> mod3.data_1
+
+# Print config for debugging
+print(str(pipe_cfg))
+# Inputs:
+#   |- data_0: mod1.data_0
+#   |- data_1: mod1.data_1
+#   |- data_2: mod3.data_0
+# Outputs:
+#   |- mod3.output(0)
+# Connections:
+#   |- mod1.output(0) -> mod2.data_0
+#   |- mod2.output(0) -> mod3.data_1
+
+
+```
+
+### 3.2. Use pipeline_executor to build a pipeline module with the subgraphs and configuration.
+
+Following is a build example:

Review comment:
       ```suggestion
   The interface is mostly the same as the graph executor but accepts a pipeline configuration instead of a Relay module. Here is an example.
   ```
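
For context, here is a minimal sketch of the build-interface parallel that this suggestion describes. It reuses `mod` and `pipe_cfg` from the section 3.1 example; `pipeline_executor.build_pipeline` is the API proposed in this RFC, not an existing one, and the graph-executor half uses the standard `relay.build` flow:

```python
from tvm import relay

# Graph executor: build one Relay module into a single runtime library.
with relay.build_config(opt_level=3):
    graph_lib = relay.build(mod, target="llvm")

# Pipeline executor (proposed): the same build idiom, but the input is a
# pipeline configuration that wires several Relay modules together.
with relay.build_config(opt_level=3):
    pipe_lib = pipeline_executor.build_pipeline(pipe_cfg)
```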

##########
File path: rfcs/0012-pipeline-executor.md
##########
@@ -0,0 +1,235 @@
+* 1. Manually split a Relay module into a list of Relay modules and generate the module configuration.

Review comment:
       ```suggestion
   * 1. Manually split/partition a Relay module into a list of Relay modules and generate the module configuration (automatic module splitting is out of scope of this RFC and will be future work).
   ```

##########
File path: rfcs/0012-pipeline-executor.md
##########
@@ -0,0 +1,235 @@
+### 3.2. Use pipeline_executor to build a pipeline module with the subgraphs and configuration.
+
+Following is a build example:
+
+```python
+
+# Use the config to build a pipeline executor
+with relay.build_config(opt_level=3):
+    lib = pipeline_executor.build_pipeline(pipe_cfg)
+
+```
+
+### 3.3. Use pipeline_executor to load the pipeline module and run the network in pipeline-parallelism mode.
+
+Pipeline executor works asynchronously. Unlike the graph executor, which
+launches a task by calling a blocking `run` API, we can kick off a task by
+calling the non-blocking `set_input` API in pipeline executor:
+
+1. set_input: queue the input in the buffer.
+2. run: run with the input at the front.
+3. set_input: queue the input in the buffer.
+4. run: run with the input at the front.
+5. get_output
+6. set_input: queue the input in the buffer.
+7. run: run with the input at the front.
+8. get_output
+9. get_output
+
+`get_output` can be called anytime, and it will return an empty array if no output is ready.
+
+Following is one example:

Review comment:
       ```suggestion
   Pipeline executor works asynchronously. Unlike the blocking `run` API in graph executor,
   the `run` API in pipeline executor is non-blocking. As a result, we could have the following scenario:
   
   1. set_input(): Push the input to the queue.
   2. run(): Launch a task with the first input in the queue.
   3. set_input(): Push the second input to the queue.
   4. set_input(): Push the third input to the queue.
   5. run(): Launch a task with the second input.
   6. get_output(): Get the output of the first input.
   7. run(): Launch a task with the third input. 
   8. get_output(): Get the output of the second input.
   9. get_output(): Get the output of the third input.
   
   As can be seen, `get_output()` can be called anytime to get the first available output
   in the result queue, and it will return an empty array if no output is ready.
   
   Following is one example:
   ```
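
To make that scenario concrete, here is a hedged sketch of the call sequence in user code. The `PipelineModule` wrapper name, the input name `"data_0"`, the input shape, and the exact `set_input`/`run`/`get_output` signatures are assumptions based on this RFC's description, not a finalized API:

```python
import numpy as np

# Hypothetical runtime wrapper around the built pipeline library (name assumed).
pipeline = pipeline_executor.PipelineModule(lib)

# Three dummy inputs; the shape is illustrative only.
inputs = [np.random.rand(1, 3, 224, 224).astype("float32") for _ in range(3)]

pipeline.set_input("data_0", inputs[0])  # 1. push the first input to the queue
pipeline.run()                           # 2. launch a task with the first input
pipeline.set_input("data_0", inputs[1])  # 3. push the second input
pipeline.set_input("data_0", inputs[2])  # 4. push the third input
pipeline.run()                           # 5. launch a task with the second input
out0 = pipeline.get_output()             # 6. output of the first input ([] if not ready)
pipeline.run()                           # 7. launch a task with the third input
out1 = pipeline.get_output()             # 8. output of the second input
out2 = pipeline.get_output()             # 9. output of the third input
```

In real code each `get_output()` would be polled until it returns a non-empty result, since the corresponding task may still be in flight.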




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

