manupa-arm commented on a change in pull request #9: URL: https://github.com/apache/tvm-rfcs/pull/9#discussion_r687400089
########## File path: rfcs/0009_Unified_Static_Memory_Planning.md ##########
@@ -0,0 +1,476 @@

- Feature Name: Unified Static Memory Planner
- Start Date: 2021 June 1
- RFC PR: #0009
- GitHub Issue: https://github.com/apache/tvm/issues/8404

# Background

Currently, given an ML model, TVM will primarily generate three main artifacts:

* A1 : executor configuration : the description of the sequential execution of operators
    1. If the "executor" is "graph", this would be a JSON
    2. If the "executor" is "aot", this would be a main function describing the call graph of operators
    3. If the "executor" is "vm", this would be a series of VM bytecode instructions
* A2 : library of operators (in the form of runtime.Module)
* A3 : compiled parameters of the model

A1 is generally created by lowering the "main" relay function, and A2 is created by lowering fused relay primitive functions → TIR PrimFuncs → C or LLVM artifacts of the operator library.

### Is there some sort of memory planning already being performed?

Yes, there is.

For A1, the inter-(fused)-operator tensors are visible in the "main" relay function. There currently exists a Relay-level pass known as "GraphPlanMemory" that works on the Relay IR to share the space used by tensors which are visible between (fused) operators and are not live simultaneously. Currently, the said pass uses the Shared Memory Buffer Object memory planning scheme (see https://blog.tensorflow.org/2020/10/optimizing-tensorflow-lite-runtime.html) to perform the planning.

For A2, the operators are lowered to TIR PrimFuncs. There exists a pass called "StorageRewrite" that does more or less the same thing as "GraphPlanMemory", but on TIR, for tensors that are visible within (fused) operators and are not live simultaneously.
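Both passes rest on the same basic observation. A minimal hand-written C sketch (illustrative only, not TVM output) of what such sharing amounts to:

```
#include <stdint.h>

/* Before planning: each intermediate tensor owns its own storage. */
static int16_t tensor_a[1024];   /* live only while op1 runs */
static int16_t tensor_b[1024];   /* live only while op2 runs */

/* After planning: tensor_a is dead before tensor_b becomes live, so
 * both can be served from the same underlying buffer at offset 0. */
static int16_t shared_buffer[1024];
static int16_t *const tensor_a_planned = &shared_buffer[0];
static int16_t *const tensor_b_planned = &shared_buffer[0];
```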
# Motivation

For embedded use-cases, it's widely accepted that aggressive memory optimizations are vital. Initially, we are looking at enabling memory planning for embedded use-cases using the AoT executor.

In this context, the current approach has two main shortcomings:

* The memory used by intermediary tensors within operators is not shared with the memory used by inter-operator tensors.

Example TIR:
```
primfn(placeholder_3: handle, placeholder_4: handle, placeholder_5: handle, T_cast_1: handle) -> ()
  attr = {"global_symbol": "fused_nn_conv2d_add_fixed_point_multiply_clip_cast_cast_21", "tir.noalias": True}
  buffers = {T_cast: Buffer(T_cast_2: Pointer(int16), int16, [1, 56, 56, 128], []),
             placeholder_2: Buffer(placeholder_6: Pointer(int32), int32, [1, 1, 1, 128], []),
             placeholder: Buffer(placeholder_7: Pointer(int16), int16, [1, 56, 56, 128], []),
             placeholder_1: Buffer(placeholder_8: Pointer(int16), int16, [3, 3, 128, 1], [])}
  buffer_map = {placeholder_3: placeholder, placeholder_4: placeholder_1, placeholder_5: placeholder_2, T_cast_1: T_cast} {
  attr [PaddedInput: Pointer(int16)] "storage_scope" = "global";
  allocate(PaddedInput, int16, [430592]);
  attr [DepthwiseConv2d: Pointer(int32)] "storage_scope" = "global";
  allocate(DepthwiseConv2d, int32, [401408]) {
    for (i1: int32, 0, 58) {
      for (i2: int32, 0, 58) {
        for (i3: int32, 0, 128) {
          PaddedInput[(((i1*7424) + (i2*128)) + i3)] = @tir.if_then_else(((((1 <= i1) && (i1 < 57)) && (1 <= i2)) && (i2 < 57)), (int16*)placeholder_7[((((i1*7168) + (i2*128)) + i3) - 7296)], 0i16, dtype=int16)
        }
  ...
```

The above TIR snippet shows that the two intra-operator buffers, PaddedInput and DepthwiseConv2d, are not visible for optimization by the Relay-level GraphPlanMemory approach.

* Assumption of local optimization: performing sharing inside the operator first and subsequently sharing that workspace with inter-operator tensors would be sub-optimal.

Thus, for embedded use-cases, we need a unified static memory planner that performs memory planning of all tensors holistically to achieve the best memory utilization.

# Goals

G1. There should be no TVMBackendAlloc(/Free)Workspace calls generated for tir.allocates that can be evaluated at compile time.

Currently, the TVM codegen and the AoT executor rely on TVMB(A/F)W calls to increment/decrement a pointer of a user-provided workspace buffer. By the end of this set of work, if the backend uses Unified Static Memory Planning, there should be no TVMB(A/F)W calls; instead, the correct offsets into the user-provided buffer should be codegen'd for allocates whose size argument can be evaluated at compile time. Dynamically sized allocates will remain untouched and will thus be lowered as usual.

G2. The static memory planning algorithm should be changeable.

There are a variety of memory planning algorithms in discussion, with different tradeoffs (see https://discuss.tvm.apache.org/t/discussion-alignment-memory-planning/9730 and https://blog.tensorflow.org/2020/10/optimizing-tensorflow-lite-runtime.html). Depending on the topology and the schedules of intermediary buffers, it should be easy to swap the memory planning algorithm. However, the current design ties the algorithm intimately to the IR constructs, making it hard to modularize or change the algorithm without inventing a whole new pass. In reality, the outcome of USMP's algorithm is a set of offsets within a given workspace buffer; to produce it, the algorithm should only need to know the size of each tensor and their relative liveness. Therefore, the algorithm interface to USMP should be kept simple enough that more algorithms can be added.
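To make the intended separation concrete, here is a minimal sketch of what such a narrow algorithm interface could look like. The names (`BufferInfo`, `PlanResult`, `PlanAlgoFn`) are invented for illustration and are not the proposed TVM API; the point is only that the algorithm sees sizes and liveness, and emits offsets:

```
#include <stddef.h>

/* All the algorithm may see about a tensor: its size and the interval
 * (in execution order) during which it is live. */
typedef struct {
    size_t size_bytes;
    int first_use;   /* index of the first operator touching it */
    int last_use;    /* index of the last operator touching it  */
} BufferInfo;

/* All the algorithm may decide: one offset per tensor, plus the total
 * pool size those offsets imply. */
typedef struct {
    size_t *offsets;    /* one entry per tensor */
    size_t pool_size;   /* max(offset + size) over all tensors */
} PlanResult;

/* Any planner with this signature (greedy-by-size, first-fit, ...)
 * can then be swapped in without touching the IR. */
typedef PlanResult (*PlanAlgoFn)(const BufferInfo *bufs, size_t n);
```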
G3. Multiple pool support (including constants)

Ideally, the user would expect to provide these buffers at the granularity of the memories they want to pin them to. E.g., if there are two RW memories, DRAM and SRAM, the buffers need to be identified and pooled by the compiler. Similarly, for constant data, we need a mechanism that allows the user to pin it to appropriate memories; addresses in the IR would then simply be offsets into the constant buffer(s) provided by the user.

# Guide-level explanation

## U1: Most simple use case

### TVMC

```
tvmc compile my_model.tflite --executor=aot --output-format=mlf --target=c
```

### Codegen'd artifacts

```
// Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model = {
    ...
    .entrypoint = &entrypoint,
};

static uint8_t workspace_buffer[WORKSPACE_BUFFER_SIZE];
static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = <compiler_generated_constant_data>;

static int32_t entrypoint(TVMInputs_my_model* inputs,
                          TVMOutputs_my_model* outputs,
                          TVMContext* context){
    return my_model_main(inputs->input0,
                         outputs->output0,
                         workspace_buffer,
                         parameters_buffer,
                         context->resource_handle);
}
```
```
// metadata.h

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model;
```

### User Application

```
// The User Application
extern const TVMModel my_model;
int main(...) {
    ...
    TVMInputs_my_model inputs = {my_data};
    TVMOutputs_my_model outputs = {output_space};
    TVMExecute(&my_model,
               &inputs,
               &outputs,
               NULL);
}
```
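For context, the entrypoint-based wiring above suggests a very thin executor shim. A sketch of how a `TVMExecute`-style call could resolve through the model descriptor — the struct layouts here are purely illustrative assumptions, not TVM's actual definitions:

```
#include <stdint.h>

/* Illustrative-only declarations: the real TVMModel/TVMContext layouts
 * come from the codegen'd metadata, not from this sketch. */
typedef struct TVMContext TVMContext;
typedef int32_t (*TVMEntrypointFn)(void *inputs, void *outputs,
                                   TVMContext *context);
typedef struct {
    TVMEntrypointFn entrypoint;
} TVMModel;

/* The executor "runtime" then reduces to a single indirect call. */
static inline int32_t TVMExecute(const TVMModel *model, void *inputs,
                                 void *outputs, TVMContext *context) {
    return model->entrypoint(inputs, outputs, context);
}
```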
## U2: User wants to share workspaces

### TVMC
```
tvmc compile my_model_1.tflite
    --executor=aot
    --output-format=mlf
    --target=accel,c
    --with-workspace-buffer="name=sram;target=c,accel"

tvmc compile my_model_2.tflite
    --executor=aot
    --output-format=mlf
    --target=accel,c
    --with-workspace-buffer="name=sram;target=c,accel"
```
### Codegen'd Artifacts
```
// Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model_1 = {
    ...
    .entrypoint = &entrypoint,
};

static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = <compiler_generated_constant_data>;

static int32_t entrypoint(TVMInputs_my_model_1* inputs,
                          TVMOutputs_my_model_1* outputs,
                          TVMContext* context){
    return my_model_1_main(inputs->input0,
                           outputs->output0,
                           parameters_buffer,
                           context->workspaces.sram,
                           context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* sram;
} TVMWorkspaces_my_model_1;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model_1;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model_1;
```
```
// Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model_2 = {
    ...
    .entrypoint = &entrypoint,
};

static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = <compiler_generated_constant_data>;

static int32_t entrypoint(TVMInputs_my_model_2* inputs,
                          TVMOutputs_my_model_2* outputs,
                          TVMContext* context){
    return my_model_2_main(inputs->input0,
                           outputs->output0,
                           parameters_buffer,
                           context->workspaces.sram,
                           context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* sram;
} TVMWorkspaces_my_model_2;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model_2;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model_2;
```
### User Application
```
// The User Application
extern const TVMModel my_model_1;
extern const TVMModel my_model_2;

// Define TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE as the maximum of
// TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE and TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE.
// Alternatively, the user could use a malloc (if permitted and desired) to compute the max at runtime.
static uint8_t workspace_buffer[TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE];

int main(...) {
    ...
    TVMContext context;
    TVMInputs_my_model_1 inputs_1 = {my_data_1};
    TVMOutputs_my_model_1 outputs_1 = {output_space_1};
    TVMWorkspaces_my_model_1 workspaces_1 = {
        .sram = workspace_buffer,
    };
    TVMSetWorkspaces(&context, &workspaces_1);
    TVMExecute(&my_model_1, &inputs_1, &outputs_1, &context);
    ...
    TVMInputs_my_model_2 inputs_2 = {my_data_2};
    TVMOutputs_my_model_2 outputs_2 = {output_space_2};
    TVMWorkspaces_my_model_2 workspaces_2 = {
        .sram = workspace_buffer,
    };
    TVMSetWorkspaces(&context, &workspaces_2);
    TVMExecute(&my_model_2, &inputs_2, &outputs_2, &context);
    ...
}
```
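The comment in the user application above asks the user to compute the maximum of the two generated sizes themselves. One way to do that at compile time — a plain C sketch, not something USMP emits — is:

```
/* Compile-time max of the two generated sizes; assumes both macros
 * from the two metadata.h headers expand to integer constants. */
#define TVM_MAX(a, b) ((a) > (b) ? (a) : (b))
#define TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE      \
    TVM_MAX(TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE,  \
            TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE)
```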
## U3: User wants to pin buffers to different memories

### TVMC
```
tvmc compile my_model.tflite
    --executor=aot
    --target=accel,c
    --with-workspace-buffer="name=dtcm;target=c;size=1000"   # Here the size is more of a hint/guide provided to USMP
    --with-workspace-buffer="name=sram;target=c,accel"
    --with-parameter-buffer="name=itcm;target=c;size=5000"   # Here the size is more of a hint/guide provided to USMP
    --with-parameter-buffer="name=flash;target=c,accel"
```
### Codegen'd Artifacts
```
// Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model = {
    ...
    .entrypoint = &entrypoint,
};

static int32_t entrypoint(TVMInputs_my_model* inputs,
                          TVMOutputs_my_model* outputs,
                          TVMContext* context){
    return my_model_main(inputs->input0,
                         outputs->output0,
                         context->workspaces.dtcm,
                         context->workspaces.sram,
                         context->parameters.itcm,
                         context->parameters.flash,
                         context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_DTCM_WORKSPACE_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_SRAM_WORKSPACE_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_ITCM_PARAMETER_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_FLASH_PARAMETER_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* dtcm;
    uint8_t* sram;
} TVMWorkspaces_my_model;

typedef struct {
    uint8_t* itcm;
    uint8_t* flash;
} TVMParameters_my_model;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model;
```
### User Application
```
// The User Application
extern const TVMModel my_model;
__attribute__((section("ITCM"))) const uint8_t my_model_params_1[TVM_MY_MODEL_ITCM_PARAMETER_BUFFER_SIZE] = <param_1_data>;
__attribute__((section("FLASH"), aligned(16))) const uint8_t my_model_params_2[TVM_MY_MODEL_FLASH_PARAMETER_BUFFER_SIZE] = <param_2_data>;
__attribute__((section("DTCM"))) static uint8_t workspace_buffer_1[TVM_MY_MODEL_DTCM_WORKSPACE_BUFFER_SIZE];
__attribute__((section("SRAM"), aligned(16))) static uint8_t workspace_buffer_2[TVM_MY_MODEL_SRAM_WORKSPACE_BUFFER_SIZE];

int main(...) {
    ...
    TVMContext context;
    TVMInputs_my_model inputs = {input};
    TVMOutputs_my_model outputs = {output};
    TVMWorkspaces_my_model workspaces = {
        .dtcm = workspace_buffer_1,
        .sram = workspace_buffer_2,
    };
    TVMParameters_my_model parameters = {
        .itcm = my_model_params_1,
        .flash = my_model_params_2,
    };
    TVMSetWorkspaces(&context, &workspaces);
    TVMSetParameters(&context, &parameters);
    TVMExecute(&my_model, &inputs, &outputs, &context);
}
```
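To connect U3 back to goal G1, here is a hypothetical sketch of what offset-based codegen could look like inside an operator. The function name and offsets are invented purely for illustration; the real output is produced by the C codegen:

```
/* Hypothetical post-USMP operator body: every compile-time-sized
 * tir.allocate has become a fixed offset into the user-provided pool,
 * so no TVMBackendAllocWorkspace calls remain. */
#include <stdint.h>

static int32_t fused_conv2d(int16_t *placeholder, int32_t *output,
                            uint8_t *sram_workspace) {
    int16_t *PaddedInput     = (int16_t *)&sram_workspace[0];      /* offset picked by USMP */
    int32_t *DepthwiseConv2d = (int32_t *)&sram_workspace[861184]; /* offset picked by USMP */
    /* ... compute into output using PaddedInput / DepthwiseConv2d ... */
    return 0;
}
```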
# Reference-level explanation

## Overview

This should be an IRModule (TIR) → IRModule (TIR) pass.

Inputs:
* AoT TIR PrimFunc (the control function describing the call graph to operators)

Review comment:

Hi Chris,

> In the unified lowering flow we may choose to introduce explicit storage and tensor allocations into the relay AST, and then hoist memory planning out of the executors into the unified lowering flow.
> [IRModule(relay)] -> transforms (incl. mem. planning) -> [IRModule(relay)]

I don't see much value in doing memory planning at this level, as the intra-operator allocate nodes are not visible. Can you elaborate?

> With customization in the lowering flow, executors such as AoT can allow per target planning, and then generate code by matching allocation nodes in the relay AST and lowering them directly to TIR when building the main function.

Can you explain what you mean by allocation nodes in the relay AST? In our view, relay is a pure functional language that is designed as an IR to represent operator-level info. It feels wrong to do memory planning at this level.

I guess I'm more interested in knowing why graph and AoT both cannot lower the main function to TIR before performing executor-specific lowering (that is, generating the JSON or the main function). A side point to this is -- as we have discussed in the discuss post -- "I think going forward graph executor might be able to load a packed function of the tvm_main instead of json – it’ll be less confusing as how the graph executor runtime is positioned as of today which is more of a (a very thin – as its supposed to be :) ) middleware that connect the graph json and the compiled operator library". Having said that, I can see this work enabling a path (to extend) towards that -- though we only plan to create a USMP component that is a TIR IRModule → TIR IRModule pass, which we will initially test and support for the AoT executor. https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099/2

> with transforms including the ability to customize the transformations applied for different hardware targets. Moreover, transformations are applied at each stage of lowering and one or more of those transforms may include memory planning.

I don't see why this cannot happen if we have the full program in TIR.

> If we push forward with this RFC without co-designing it with unified lower, my concern is that we continue to kick the problem down the road and it will make unifying the lowering and planning for all executors more difficult.

The different executors exist for different use-cases, so there is only so much unification we can do. IMO, we should only have two executors: AoT and JIT (VM). Moreover, I don't feel static memory planning is applicable to the latter. I'd view the graph executor as an application that supports RPC (and other additional features, such as launching parallel for loops, etc.) on top of AoT (Graph = AoT++?) that could be used in a tuning process (refer to the earlier comment on the importance of using a main function instead of JSON).

I'd like to hear why designing a static memory planner that works where the full program is expressed in TIR creates a divergence, at least between graph and AoT.

@jroesch @areusch @tqchen @mbaret
