manupa-arm commented on a change in pull request #11:
URL: https://github.com/apache/tvm-rfcs/pull/11#discussion_r687689771
########## File path: rfcs/0011_Arm_Ethos-U_Integration.md ##########

- Feature Name: Arm® Ethos™-U Integration
- Start Date: 2020 May
- RFC PR: https://github.com/apache/tvm-rfcs/pull/11
- GitHub Issue: https://github.com/apache/tvm/issues/8482

# Motivation

Arm® Ethos™-U is a series of NPUs that will enable low-cost and highly efficient AI solutions for a wide range of embedded devices. This RFC introduces the port of Ethos-U into the microTVM compilation flow. Compilation relies on TVM's multiple levels of IR and a variety of analysis and optimisation passes to produce C sources that work with current microTVM deployments.

## Scope:

### Ethos™-U55

Ethos™-U55 is an NPU designed to uplift ML performance by working as an offload target for micro-controllers. It can accelerate quantized ML operators such as Convolution2D, Depthwise Convolution, Pooling and Elementwise operators. For convolution-type operators, Ethos-U55 supports hardware-enabled lossless de-compression of weights to increase inference performance and reduce power.

The scope of this RFC is to add support for offloading to the Arm Ethos-U55 NPU. The initial machine learning framework that we use for testing this is TensorFlow Lite. Future RFCs and pull requests will address additional NPUs, such as the Ethos-U65, and other frameworks as the port evolves.

Please refer to the Technical Reference Manual (TRM) for more details: https://developer.arm.com/documentation/102420/0200.
* Reference: https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55

# Guide-level explanation

## TVMC User Interface
```
tvmc compile my_model.tflite \
  --executor=aot \
  --output-format=mlf \
  --target="ethos-u --accelerator-config=ethos-u55-xxx, c"

# Produces Model Library Format output.
# xxx is one of the possible accelerator configurations: 32, 64, 128 or 256.
```

Users should be able to use the above command to compile for the Ethos-U55 and generate Model Library Format (MLF) output.
Please take a look at the example provided in the last PR (once it is published).

## Design Architecture Overview

We rely on the graph partitioning infrastructure in Relay (commonly known as BYOC) to integrate the Relay and TIR pass pipeline that generates C-source artifacts usable in an embedded deployment environment. Therefore, the generated C sources are expected to be bundled with the AOT executor in the Model Library Format (MLF) tarball. The embedded user can then use the MLF tarball as they would any AOT executor artifact in a typical embedded environment.

### Why are the operators lowered to TIR before runtime.Module is created?

The two main reasons are as follows:

#### Cascading-style performance and memory optimizations:

Given the deterministic nature of the hardware, we intend to utilize TVM's scheduling language to perform inter- and intra-operator optimizations that reduce memory footprint while maintaining good performance.

Please refer to this discuss post for more information: https://discuss.tvm.apache.org/t/rfc-cascade-scheduling/8119/8

#### Unified static memory planning:

Ethos™-U is an NPU that is aimed at running with microTVM.
Therefore, as with typical use cases of microTVM, the Ethos™-U NPU will require aggressive memory optimizations, sharing buffers with the intermediaries used by the CPU.
We envision a flow that exposes the TIR generated by the Ethos™-U codegen to the future unified static memory planner to be optimized.

For more information about the proposed unified static memory planner, please refer to this discuss post: https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099.

# Reference-level explanation

## Compilation flow

### C1. TVM Frontend and Partitioning

The Relay graph, as lowered from TVM's frontend, will be partitioned into Ethos-U subgraphs by running the AnnotateTarget, MergeCompilerRegions and PartitionGraph Relay passes. This procedure results in the creation of "external" Relay functions that are re-directed to the Ethos-U Relay and TIR pass pipeline for the creation of C sources, as stated above.

```
# A partitioned example for Conv2D

def @main(%input: Tensor[(1, 300, 300, 3), int8]) -> Tensor[(1, 298, 298, 32), int8] {
  @ethosu_0(%input) /* ty=Tensor[(1, 298, 298, 32), int8] */
}

def @ethosu_0(%ethosu_0_i0: Tensor[(1, 300, 300, 3), int8], Compiler="ethosu", ...) {
  %2 = fn (%FunctionVar_0_0: Tensor[(1, 300, 300, 3), int8],
           PartitionedFromPattern="qnn.conv2d_nn.bias_add_qnn.requantize_",
           Composite="ethosu.qnn_conv2d") {
    %0 = qnn.conv2d(%FunctionVar_0_0, meta[relay.Constant][0], -26, ...);
    %1 = nn.bias_add(%0, meta[relay.Constant][2], axis=3);
    qnn.requantize(%1, meta[relay.Constant][3], 0, 12341.8f, 0, out_dtype="int8")
  };
  %2(%ethosu_0_i0)
}
```
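For illustration, the following is a minimal sketch of how such a partitioned module could be produced with Relay's stock BYOC passes. The single `qnn.conv2d` pattern and the `partition_for_ethosu` helper shown here are assumptions made for this example (they mirror the composite above), not the final pattern table or entry point of the integration.

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard


def qnn_conv2d_pattern():
    """Match qnn.conv2d -> nn.bias_add -> qnn.requantize (the composite above)."""
    conv = is_op("qnn.conv2d")(
        wildcard(), wildcard(), wildcard(), wildcard(), wildcard(), wildcard()
    )
    bias = is_op("nn.bias_add")(conv, wildcard())
    return is_op("qnn.requantize")(
        bias, wildcard(), wildcard(), wildcard(), wildcard()
    )


def partition_for_ethosu(mod, params=None):
    """Sketch: annotate and partition supported operators into "ethosu" functions."""
    if params:
        mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)

    seq = tvm.transform.Sequential(
        [
            # Group supported operator sequences into Composite functions.
            relay.transform.MergeComposite(
                [("ethosu.qnn_conv2d", qnn_conv2d_pattern())]
            ),
            # Mark the composites for offload and carve out Compiler="ethosu"
            # external functions, as in the example above.
            relay.transform.AnnotateTarget("ethosu"),
            relay.transform.MergeCompilerRegions(),
            relay.transform.PartitionGraph(),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```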
### C2. Relay Legalization to Ethos™-U HW Primitive operations

In this design, we have decided to introduce TEs that closely describe the compute of each primitive operation the hardware can natively execute; we define these as Ethos™-U HW primitive operations, each represented by its own Relay operator. Moreover, there are many Relay operators that could be lowered to the Ethos™-U HW primitives (e.g., dense could be legalized to a conv2d operator). This component legalizes the external Relay function to Ethos™-U HW primitive operations.

Ethos™-U hardware supports per-channel quantization by encoding a scale with each bias value. Thus, the weight scales are converted to that format and packed with the biases. Thereafter, the packed biases and scales are made a constant input to the Relay operator.
For more details, please refer to: https://developer.arm.com/documentation/102420/0200

```
# This is the above partitioned function legalized to the ethosu.conv2d operator.

fn (%ethosu_0_i0: Tensor[(1, 300, 300, 3), int8], ..., global_symbol="ethosu_0", Primitive=1) {
  contrib.ethosu.conv2d(%ethosu_0_i0, meta[relay.Constant][0], meta[relay.Constant][1], -26, ...)
}
```

### C3. Ethos™-U TE/TIR Compiler Passes

At this stage, we should have a TE representation of all HW primitive operations that belong to the offloaded function. We will schedule the TE representation into a TIR PrimFunc that describes the intermediary storage and the hardware operations that need to be executed. In future, we intend to add more TE/TIR passes that make the Ethos™-U TE/TIR compiler perform memory and performance optimizations (see https://discuss.tvm.apache.org/t/rfc-cascade-scheduling/8119). Therefore, it is vital to have all the operations represented in TE/TIR. It is important to note that Ethos™-U hardware requires weights to be 'encoded' in a certain way to be readable by the hardware.
Therefore, the weight encoding is performed here and represented in the TIR PrimFunc with post-encoding sizes as buffers.
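To make the TE-to-TIR path concrete, below is a rough, hypothetical sketch using stock TVM APIs. The compute is a stand-in identity operation rather than an actual Ethos™-U HW primitive such as `ethosu.conv2d`; it only illustrates how a TE expression is scheduled and lowered to a TIR PrimFunc, on which the Ethos™-U-specific passes (weight encoding, cascading, etc.) would then operate.

```python
import tvm
from tvm import te

# Stand-in compute: a trivial element-wise copy in place of a real Ethos(TM)-U
# HW primitive operation.
ifm = te.placeholder((1, 298, 298, 32), dtype="int8", name="ifm")
ofm = te.compute(ifm.shape, lambda *i: ifm(*i), name="ofm")

# Schedule the TE expression and lower it to a TIR PrimFunc. The Ethos-U
# TE/TIR compiler would apply its own scheduling and passes at this level.
sch = te.create_schedule(ofm.op)
mod = tvm.lower(sch, [ifm, ofm], name="ethosu_primfunc")
print(mod)  # inspect the resulting TIR
```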
Review comment: Yes.
