huajsj commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r684453780
##########
File path: rfcs/0012-pipeline-executor.md
##########

@@ -0,0 +1,365 @@
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->

<!--- http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

- Feature Name: `pipeline-executor`
- Start Date: (fill me in with today's date, YYYY-MM-DD)
- RFC PR: [apache/tvm-rfcs#14](https://github.com/apache/tvm-rfcs/pull/14)
- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)

## 1. Summary

This proposal introduces Pipeline Executor: a runtime executor that schedules the split subgraphs of a Relay graph in a pipeline, implementing task-level parallelism to reduce compute latency.

## 2. Motivation

Currently, more and more edge-device inference deployments happen on SoC devices. Since SoC devices carry heterogeneous chipsets such as GPU, FPGA, CPU, and DSP, reaching the best performance requires running an ML network across these heterogeneous chipsets. However, the current graph executor has no parallelism logic, and the existing data-parallelism solution only supports parallelism on a homogeneous chipset (device). Consequently, the only way to do batch processing on heterogeneous devices with TVM is to treat the whole ML network as one schedule unit and run it on the different heterogeneous devices, but that causes a latency issue: the low-speed chipset becomes the latency bottleneck for single-data processing.

Therefore, we need a runtime executor that provides parallel scheduling with a finer-grained schedule unit, such as a subgraph (a group of operators with dependency relations), to use SoC heterogeneous hardware resources more efficiently and achieve better performance.

### Benefits of Pipeline Executor

Pipeline Executor provides three benefits:

* Computing a single network on multiple backends in parallel to improve performance.

* Using RPC to perform distributed computation across multiple remote devices.

* Letting the user integrate pre-compute processing and post-processing with the network compute in the same executor.

## 3. Guide-level explanation

Pipeline Executor is a runtime executor that implements pipeline execution logic for multiple subgraphs and relies on graph_executor for operator storage and execution.

This section introduces the use case for Pipeline Executor; the sketch after this list shows how the steps fit together.

* 1. Manually construct pipeline subgraphs from a network compute graph.
* 2. Manually construct the pipeline subgraph configuration that defines the dependencies and target devices.
* 3. Use pipeline_executor to build a pipeline module from the subgraphs and the configuration.
* 4. Use pipeline_executor to load the pipeline module and run the network in pipeline-parallelism mode.
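To make the four steps concrete, here is a minimal end-to-end sketch. It is illustrative only: the `pipeline_executor` module path, the `build`/`create` function names, and the configuration layout are assumptions for this guide, not the final API defined by this RFC. `pipeline_graph` is the helper defined in section 3.1 below; `mod` is assumed to come from a Relay frontend import, and `input_data` is a user-supplied NumPy array.

```python
# Illustrative sketch only: the pipeline_executor names and the configuration
# schema below are assumptions describing the intended workflow, not final API.
from tvm.contrib import pipeline_executor  # hypothetical module path

# Step 1: split the network into pipeline subgraphs (pipeline_graph is
# defined in section 3.1 below).
subgraphs = pipeline_graph(mod["main"], [11, 22])

# Step 2: per-subgraph configuration: which backend each stage runs on and
# which stage consumes its output (a dependency chain 0 -> 1 -> 2 here).
pipe_config = [
    {"mod_idx": 0, "target": "cuda", "dev_id": 0, "dep": 1},
    {"mod_idx": 1, "target": "llvm", "dev_id": 0, "dep": 2},
    {"mod_idx": 2, "target": "llvm", "dev_id": 0, "dep": -1},  # -1: final output
]

# Step 3: build a pipeline module from the subgraphs plus the configuration.
pipe_module = pipeline_executor.build(subgraphs, pipe_config)

# Step 4: load and run. While stage 1 processes input N, stage 0 can already
# start on input N+1 -- the task-level parallelism this RFC targets.
runner = pipeline_executor.create(pipe_module)
runner.set_input("data", input_data)
runner.run()
output = runner.get_output(0)
```

The key design point is that the configuration makes inter-subgraph dependencies and device placement explicit, so the executor can overlap the stages instead of serializing the whole network on one device.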
### 3.1. Manually constructing pipeline subgraphs from a network compute graph

A pipeline subgraph is a subset of the network compute graph; different pipeline subgraphs have dependency relations between them, and each pipeline subgraph runs on a different backend. The purpose of splitting the network into pipeline subgraphs is to perform the network compute on different compute units and pipeline them to reduce compute latency. The following is an example of splitting a network compute graph.

```python
import tvm
from tvm import relay
from tvm.relay import transform


def pipeline_graph(expr, indices):
    """Split a graph into a group of subgraphs.

    Parameters
    ----------
    expr : tvm.relay.Expr
    indices : Array[int]

    Returns
    -------
    ret : Array[tvm.relay.IRModule]
    """

    def run_opt_pass(expr, opt_pass):
        """Execute a relay pass."""
        assert isinstance(opt_pass, tvm.transform.Pass)
        mod = tvm.IRModule.from_expr(expr)
        mod = tvm.relay.transform.InferType()(mod)
        mod = opt_pass(mod)
        entry = mod["main"]
        return entry if isinstance(expr, tvm.relay.Function) else entry.body

    def _operator_idx_inc(expr, operator_current_idx):
        """Increase the operator index."""
        if not isinstance(expr, tvm.relay.expr.Constant):
            operator_current_idx = operator_current_idx + 1

        return operator_current_idx

    def merge_constant_expr(constant_expr, expr):
        # Merge a constant expression with another expression.
        # Parameters
        # ----------
        # constant_expr:
        #     constant expression
        # expr:
        #     expression to merge with the constant expression

        # If the body is not a Let, we reached the end of the expression.
        if not isinstance(constant_expr.body, tvm.relay.expr.Let):
            return tvm.relay.expr.Let(constant_expr.var, constant_expr.value, expr)

        return tvm.relay.expr.Let(
            constant_expr.var, constant_expr.value, merge_constant_expr(constant_expr.body, expr)
        )

    def _recursion(anf, operator_indx, pipeline_mods, indices, constant_expr):
        # Enumerate all operators of the compute graph, then split the compute
        # graph into a group of subgraphs.
        # Parameters
        # ----------
        # anf:
        #     ANF format expression
        # operator_indx:
        #     current operator index
        # pipeline_mods:
        #     the subgraph list is stored in this variable
        # indices:
        #     array of indices that defines the subgraph scopes
        # constant_expr:
        #     constants defined before the current operator

        # Do the split work.
        if isinstance(anf, tvm.relay.Function):
            return tvm.relay.Function(
                anf.params,
                _recursion(anf.body, operator_indx, pipeline_mods, indices, constant_expr),
                anf.ret_type,
                anf.type_params,
                anf.attrs,
            )
        if isinstance(anf, tvm.relay.expr.Let):
            value = anf.value
            operator_indx = _operator_idx_inc(value, operator_indx)

            # Record constant expressions to make sure every subgraph can find
            # the correct constants.
            if isinstance(value, tvm.relay.expr.Constant):
                if not constant_expr:
                    constant_expr = tvm.relay.expr.Let(anf.var, value, anf.var)
                else:
                    constant_expr = tvm.relay.expr.Let(anf.var, value, constant_expr)

            if isinstance(value, tvm.relay.expr.Call):
                if isinstance(value.op, tvm.ir.Op):

                    # If we have the expression a(b(c(d(e)))) and the indices are
                    # [1, 2, 3], we get separate modules for a(b), c, and d(e).
                    # The split areas are a(b)[0,1], c[2,2], d(e)[2,3].
                    if indices and operator_indx == indices[0]:
                        indices.pop(0)
                        ann = _recursion(
                            anf.body, operator_indx, pipeline_mods, indices, constant_expr
                        )

                        # When the current subgraph uses a constant from a previous
                        # subgraph, that constant may become a free variable because
                        # it does not exist here; merge the previous constants into
                        # the current subgraph to avoid this issue.
                        if constant_expr:
                            ann = merge_constant_expr(constant_expr, ann)

                        ann = run_opt_pass(ann, transform.ToGraphNormalForm())
                        mod = tvm.IRModule.from_expr(ann)
                        pipeline_mods.insert(0, mod)
                        return tvm.relay.expr.Let(anf.var, value, anf.var)
            return tvm.relay.expr.Let(
                anf.var,
                value,
                _recursion(anf.body, operator_indx, pipeline_mods, indices, constant_expr),
            )
        else:
            return anf

    pipeline_mods = []

    # The operator count starts from 0, so the initial value is set to -1.
    operator_indx = -1
    constant_expr = None
    subgraph_indices = indices.copy()
    anf = run_opt_pass(expr, transform.ToANormalForm())
    anf = run_opt_pass(anf, transform.InferType())
    ann = _recursion(anf, operator_indx, pipeline_mods, subgraph_indices, constant_expr)
    ann = run_opt_pass(ann.body, transform.ToGraphNormalForm())
    mod = tvm.IRModule.from_expr(ann)
    pipeline_mods.insert(0, mod)
    return pipeline_mods


# ...
mod, params = relay.frontend.from_darknet(net, dtype=dtype, shape=dshape)
split = [11, 22]
```
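Continuing the snippet, a brief hedged usage example: with `split = [11, 22]`, `pipeline_graph` splits the darknet network at operator indices 11 and 22, which should yield three `IRModule` stages (one per split index, plus the remainder). Here `net`, `dtype`, and `dshape` are assumed to come from the usual darknet-frontend setup, parameter binding is omitted for brevity, and the targets chosen are illustrative.

```python
# Continuing the snippet above: split the network at operators 11 and 22.
subgraphs = pipeline_graph(mod["main"], split)
print(len(subgraphs))  # expected: 3 stages (one per split index, plus the remainder)

# Each stage is an ordinary relay.IRModule, so it can be compiled
# independently for its own backend (targets here are illustrative).
targets = ["cuda", "llvm", "llvm"]
stage_libs = [relay.build(sg, target=tgt) for sg, tgt in zip(subgraphs, targets)]
```

Because each stage is a plain Relay module, the existing per-target build flow is reused unchanged; only the scheduling of the stages is new.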
Review comment:

   Currently, Pipeline Executor relies on a manual "graph split" and a manual "configuration". This logic is only an example demonstrating how to generate the output manually; it is not part of this RFC's solution. In the future we will introduce an "auto graph split" feature to hide this complexity and make the graph-split logic transparent to the user. I will update the document to avoid confusion.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
