mbs-octoml commented on a change in pull request #62: URL: https://github.com/apache/tvm-rfcs/pull/62#discussion_r836623641
########## File path: rfcs/xxxx-collage.md ########## @@ -0,0 +1,987 @@

# Design Doc: Collage [Draft 0.8]

```
Feature Name: Collage
Start Date: Mar 2022
Authors: Mark Shields ([email protected])
RFC PR: <tbd>
GitHub Issue: <tbd>

History:
- v0.7: First draft.
- v0.8: Rework to emphasise 'partitioning' (quite early in the pipeline) instead of 'fusion' (quite late in the pipeline).
```

This design doc (with an accompanying
['v2' prototype implementation](https://github.com/mbs-octoml/mbs-tvm/tree/mbs-collage-sketch))
shows how to bring tuning to TVM's BYOC partitioning passes. The tuning search explores the choice of sub-graphs (aka
'partitions') and toolchains (aka 'backends') so as to minimize the expected model inference latency. Both 'graph
style' (eg TensorRT) and 'library style' (eg DNNL) BYOC integrations are supported. We call the result an 'optimal
partitioning'. This new tuning layer complements the tuning traditionally done by TVM and other toolchains during
lowering. It can also complement any global tuning, for example exploring the choice of layout convention or device
assignment.

The approach is based on the [preprint](https://arxiv.org/pdf/2111.00655.pdf):

> *Collage: Automated Integration of Deep Learning Backends*
> Byungsoo Jeon, Sunghyun Park, Peiyuan Liao, Sheng Xu, Tianqi Chen, Zhihao Jia

(See Appendix A for a comparison of this proposal and the paper's implementation. See Appendix D for TODO items in the
'v2' prototype.)

This tuning approach contrasts with TVM's existing "greedy" and "manual" approaches to partitioning:

- Greedy: Currently only the largest possible supported sub-graphs are used as partitions, irrespective of their
  execution time. With Collage many more candidate sub-graphs are explored, and it is possible for two smaller
  sub-graphs to yield better overall latency than one large sub-graph if they mix toolchains.
- Manual: Currently the TVM user must commit to a BYOC toolchain and invoke the corresponding
  `partition_for_<toolchain>` function before the main TVM compilation flow begins. With Collage the choice of toolchain
  can be automated based on measured latency. Collage will also explore mixing and matching between multiple BYOC
  toolchains as well as TVM's native backend.

When Collage is enabled it replaces the existing `MergeComposite`/`AnnotateTarget`/`MergeCompilerRegions`/
`PartitionGraph` passes embedded within each `partition_for_<toolchain>` function with a single new
`CollagePartitioner` pass. The pass is guided by the list of available `Target`s and three existing sources:

1. The `"TOpPattern"` attributes provided for every Relay operator and used by TVM's built-in `FuseOps` pass.
2. The BYOC `"target.<toolchain>"` operator predicates provided for some operator/toolchain pairs by
   'operator-based' BYOC integrations.
3. The BYOC operator patterns/predicates (usually) registered in the pattern table by 'pattern-based' BYOC integrations.

Only some boilerplate aspects of existing BYOC integrations need to be adjusted to support Collage (and we will make
these changes either as part of or in coordination with the UMA project). However, Collage may require more robustness
from the BYOC integrations; see Appendix F.

Note, however, that we are **not** proposing to deprecate the existing `partition_for_<toolchain>` operations (or their
UMA equivalents). This is mostly because Collage is inherently a tuning-based system, which is not practical for users
who need a stand-alone compiler. But it is also because of the challenges of establishing a common pass ordering that
works for both TVM and all BYOC toolchains (see Appendix C for more details).
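The claim above, that two smaller sub-graphs on different toolchains can beat one large sub-graph, can be sketched with a toy dynamic program. This is not Collage's actual algorithm (which searches candidate partitions of a dataflow graph and measures real latencies); it is a minimal illustration over a linear chain of operators, with invented backend names and latency numbers:

```python
# Toy sketch of latency-driven partitioning over a linear op chain.
# Each contiguous segment is assigned wholly to one backend (if that
# backend supports every op in it); crossing a partition boundary
# costs a fixed transfer overhead. All names and numbers are invented.

OPS = ["conv2d", "relu", "dense"]

# Hypothetical measured per-op latencies (ms) per backend.
COSTS = {
    "tvm":      {"conv2d": 3.0, "relu": 0.2, "dense": 2.0},
    "byoc_gpu": {"conv2d": 1.0, "relu": 0.1, "dense": 5.0},
}
BOUNDARY_OVERHEAD = 0.3  # cost of each transition between partitions


def optimal_partitioning(ops, costs, overhead):
    """Minimum total latency over all ways to split `ops` into
    contiguous partitions, each run by one backend (simple DP)."""
    n = len(ops)
    best = [0.0] + [float("inf")] * n  # best[i]: optimal latency of ops[:i]
    choice = [None] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):  # candidate partition covering ops[j:i]
            for backend, table in costs.items():
                seg = [table.get(op) for op in ops[j:i]]
                if any(c is None for c in seg):
                    continue  # backend doesn't support the whole segment
                cand = best[j] + sum(seg) + (overhead if j > 0 else 0.0)
                if cand < best[i]:
                    best[i], choice[i] = cand, (j, backend)
    # Recover the chosen partitions.
    parts, i = [], n
    while i > 0:
        j, backend = choice[i]
        parts.append((backend, ops[j:i]))
        i = j
    return best[n], list(reversed(parts))


latency, parts = optimal_partitioning(OPS, COSTS, BOUNDARY_OVERHEAD)
print(latency, parts)
```

With these numbers, committing the whole graph to either backend costs 5.2 ms (tvm) or 6.1 ms (byoc_gpu), while the mixed partitioning `byoc_gpu:[conv2d, relu]` + `tvm:[dense]` costs 3.4 ms despite paying the boundary overhead, which is the effect Collage's search exploits.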

Collage offers three advantages:

- **Latency**: Overall model latency may be reduced compared to TVM native, TVM with a single
  `partition_for_<toolchain>` call, or a non-TVM stand-alone compiler such as TensorRT.
- **Automation**: The choice of which BYOC toolchains to enable can be automated.
- **Economy and modularity of implementation**: Four standalone passes using two separate mechanisms for expressing
  partitioning rules/algorithms can be replaced with one pass, which itself is built from compositional primitives.
  (The machinery is also reusable for the very similar problem of choosing TVM fusion kernels, which we'll tackle in
  the future.)

See Appendix H for some frequently asked questions.

## Success Metrics

1. Collage offers at least a 10% latency improvement for a selection of standard ONNX models and NVIDIA hardware using
   targets which include the CuDNN and CuBlas libraries, the CUTLASS library (with tuning, via BYOC), the TensorRT
   compiler (via BYOC), and (obviously!) TVM native.
2. Collage does not require new per-target or per-model patterns or rules to be implemented independently of the BYOC
   integrations.
3. Collage with a `Target` list enabling just one BYOC toolchain is never worse than using the existing

Review comment: Added a 'without loss of generality' comment.