huajsj commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r690031181



##########
File path: rfcs/0012-pipeline-executor.md
##########
@@ -0,0 +1,214 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+- Feature Name: Pipeline Executor
+- Start Date: 2021-07-30
+- RFC PR: [apache/tvm-rfcs#0014](https://github.com/apache/tvm-rfcs/pull/0014)
+- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)
+
+## 1. Summary
+
+
+This proposal introduces Pipeline Executor: a runtime executor that schedules split
+subgraphs of a Relay graph in a pipeline to implement task-level parallelism and
+improve compute throughput.
+
+## 2. Motivation
+
+
+
+Currently, more and more edge-device inference deployments happen on SoC devices.
+SoC devices have heterogeneous chipsets such as GPU, FPGA, CPU, and DSP, and reaching
+the best performance requires running an ML network across these heterogeneous
+chipsets. However, the current graph executor does not have parallelism logic, and
+the existing data-parallelism solution only supports parallelism on a homogeneous
+chipset (device). Hence, the only way to do batch processing on heterogeneous devices
+with TVM is to treat a whole ML network as one schedule unit and run it on the
+different heterogeneous devices, but that causes a latency issue (the low-speed
+chipset becomes the latency bottleneck for processing a single input).
+
+Therefore, we need a runtime executor that provides parallel scheduling with a
+finer-grained schedule unit such as a subgraph (a group of operators with dependency
+relations), so that SoC heterogeneous hardware resources can be used more efficiently
+to achieve better performance.
+
+
+### Benefits of Pipeline Executor
+
+Pipeline Executor provides three benefits:
+
+* Computing a single network on multiple backends in parallel to improve performance.
+
+* Using RPC to perform distributed computation across multiple remote devices.
+
+* The capability to integrate non-DNN model functions.
+
+## 3. Guide-level explanation
+Pipeline Executor is a runtime executor which implements pipeline execution logic for
+multiple subgraphs and relies on graph_executor for operator storage and execution.
+
+This section introduces the use case for Pipeline Executor.
+
+* 1. Use the Automatic Graph Split feature to construct the pipeline subgraphs and
+configuration.
+* 2. Use pipeline_executor to build a pipeline module with the subgraphs and
+configuration.
+* 3. Use pipeline_executor to load the pipeline module and run the network in
+pipeline-parallelism mode (a conceptual sketch follows this list).
+
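+A minimal conceptual sketch of the pipeline execution idea (plain Python threads and
+queues; the two subgraph functions and devices are placeholders for illustration,
+not the pipeline_executor API described below):
+
+```python
+import queue
+import threading
+
+def make_stage(run_subgraph, in_q, out_q):
+    # Each stage repeatedly takes an input, runs its subgraph, and forwards the
+    # result to the next stage, so consecutive inputs overlap in time.
+    def worker():
+        while True:
+            data = in_q.get()
+            if data is None:            # shutdown signal
+                out_q.put(None)
+                break
+            out_q.put(run_subgraph(data))
+    return threading.Thread(target=worker, daemon=True)
+
+# Placeholder subgraphs; in practice these are two compiled subgraphs of one
+# network running on different devices (e.g. GPU and DSP).
+subgraph_0 = lambda x: x + 1
+subgraph_1 = lambda x: x * 2
+
+q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
+for stage in (make_stage(subgraph_0, q0, q1), make_stage(subgraph_1, q1, q2)):
+    stage.start()
+
+for x in range(4):                      # feed a small batch through the pipeline
+    q0.put(x)
+q0.put(None)
+
+results = []
+while (r := q2.get()) is not None:
+    results.append(r)
+print(results)                          # [2, 4, 6, 8]
+```
+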
+### 3.1. Using the Automatic Graph Split feature to construct pipeline subgraphs and configuration.
+
+This feature is not in the scope of this RFC; its logic is as follows.
+
+The solution includes three steps: 1. Operator Auto Tune, 2. Graph dependency tree
+build and balance, 3. Graph Auto Tune. More details follow.
+
+#### 3.1.1 Operator Auto Tune
+
+* a. In the Operator Auto Tune step, the user uses the existing tuning logic to tune
+every operator; the tuning happens separately and serially on every target involved
+in the pipeline executor.
+
+* b. After operator tuning is done, we get performance data, for example: the best
+perf of conv2d_0 is 3 ms on GPU and 2 ms on VTA. This perf data is used in the later
+Graph dependency tree build and balance step.
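+
+For illustration, this perf data can be thought of as a per-operator, per-target
+table (the numbers below are made up, except conv2d_0 which reuses the example
+above):
+
+```python
+# Hypothetical per-operator best latencies (ms) per target, as produced by 3.1.1.
+op_perf = {
+    "conv2d_0": {"gpu": 3.0, "vta": 2.0},
+    "conv2d_1": {"gpu": 1.5, "vta": 2.5},
+    "dense_0":  {"gpu": 0.8, "vta": 1.2},
+}
+
+# The balance step in 3.1.2 consumes this table; e.g. the time of a subgraph on a
+# target is the sum of its operators' times on that target.
+def subgraph_time(ops, target):
+    return sum(op_perf[o][target] for o in ops)
+
+print(subgraph_time(["conv2d_0", "conv2d_1"], "vta"))   # 4.5
+```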
+
+#### 3.1.2. Graph dependency tree build and balance
+
+* a. Initialize a DAG whose nodes are subgraphs. Initially, for an N-node DAG, the
+first [1, N-1] nodes map to layers [1, N-1] (compute-intensive operators and others)
+of the original compute graph, and node N maps to layers [N, M], where M is the
+number of layers in the original compute graph.
+
+* b. Using the perf data generated in 3.1.1.b, every dependency tree node gets a
+time-consumption value. At the beginning these values differ across nodes, so we say
+the DAG is not balanced in node weight. By adjusting the scope of each node (how many
+operators the subgraph contains), we make every node of the DAG have the same, or
+nearly the same, weight (time consumption); such a DAG is one graph split solution.
+The DAG records the parent/child relation (a child can only run after its parent has
+run), and the scope adjustment can only happen between a parent and its child. A
+small illustrative sketch of this balancing idea follows the list.
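+
+A minimal sketch of the balancing idea, using made-up per-layer times (in practice
+these come from the operator tuning in 3.1.1): greedily grow each stage's scope over
+contiguous layers so that every pipeline stage gets roughly the same total time.
+
+```python
+# Hypothetical per-layer times (ms) on the device each stage is assigned to.
+layer_times = [1.0, 2.5, 0.5, 3.0, 1.5, 2.0, 1.0, 2.5]
+num_stages = 3
+
+def split_layers(times, k):
+    """Greedily assign contiguous layers to k stages, targeting sum(times)/k each."""
+    target = sum(times) / k
+    stages, current, acc = [], [], 0.0
+    for i, t in enumerate(times):
+        # Open a new stage before exceeding the per-stage target, as long as
+        # there are still stages left to open.
+        if current and acc + t > target and len(stages) < k - 1:
+            stages.append(current)
+            current, acc = [], 0.0
+        current.append(i)
+        acc += t
+    stages.append(current)
+    return stages
+
+for s in split_layers(layer_times, num_stages):
+    print(s, sum(layer_times[i] for i in s))
+# [0, 1, 2] 4.0
+# [3, 4] 4.5
+# [5, 6, 7] 5.5
+```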
+
+#### 3.1.3 Graph Auto Tune
+* a. Step 3.1.2 can generate more than one subgraph split solution (DAG). In this
+step, Graph Auto Tune tries these multiple solutions to find the best configuration.
+
+After steps 1, 2, and 3, we get an automatic graph split configuration.
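+
+One simple selection criterion (an assumption for illustration; the RFC text does not
+fix how "best" is measured) is to prefer the split whose slowest stage is fastest,
+since the slowest stage bounds steady-state pipeline throughput:
+
+```python
+# Hypothetical candidate splits, each described by its per-stage times (ms).
+candidates = {
+    "split_A": [4.0, 4.5, 5.5],
+    "split_B": [5.0, 5.0, 4.0],
+    "split_C": [7.0, 3.5, 3.5],
+}
+
+# The slowest stage limits how often the pipeline can accept a new input,
+# so pick the candidate with the smallest maximum stage time.
+best = min(candidates, key=lambda name: max(candidates[name]))
+print(best, max(candidates[best]))   # split_B 5.0
+```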
+
+### 3.2. Use pipeline_executor to build a pipeline module with the subgraphs and configuration.
+
+Pipeline executor provides a build function to compile and save the compiled output to disk,

Review comment:
       fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

