areusch commented on pull request #7742:
URL: https://github.com/apache/tvm/pull/7742#issuecomment-856072634


   @stoa thanks for your detailed reply. i've tried to provide answers to all of your questions--let me know if a higher-bandwidth discussion would be helpful to align here (we can use the Discord and document the outcome here).
   
   > Do we agree on the following "software stack" ?

   i'd draw the software stack like so:
   ```
        deployed inference application      generic app for host-driven inference/autotvm
                  |                                               |
         firmware-facing API                                      |
                  |                                               |
                AOT [c]       graph executor ----- c_runtime_api -- utvm_rpc_server
                    |                |                  |
                   compiled operators [c,so]            |
                                   |                    |
                       c_backend_api.[h,c]  <-----------+
                               |
                     platform-specific [platform.h, platform-specific implementation location]
   ```
   
   Does this make sense? The c_backend_api is mainly a glue layer between the 
API exposed to operator functions and the platform-specific API, but it allows 
them to change independently of one another.
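   to make the glue-layer role concrete, here's a minimal sketch (not the actual TVM sources--the stub `TVMPlatformMemoryAllocate` stands in for a real platform implementation, and the body is simplified):

   ```c
   #include <assert.h>
   #include <stddef.h>
   #include <stdint.h>
   #include <stdlib.h>

   /* platform layer (platform.h): supplied per board/RTOS.
    * this malloc-backed stub stands in for a real implementation. */
   static int TVMPlatformMemoryAllocate(size_t num_bytes, void** out_ptr) {
     *out_ptr = malloc(num_bytes);
     return (*out_ptr == NULL) ? -1 : 0;
   }

   /* c_backend_api glue: the symbol compiled operators call. it just
    * delegates to the platform layer, so the operator-facing API and the
    * platform API can change independently. */
   void* TVMBackendAllocWorkspace(int device_type, int device_id,
                                  uint64_t num_bytes, int dtype_code_hint,
                                  int dtype_bits_hint) {
     void* ptr = NULL;
     (void)device_type; (void)device_id;
     (void)dtype_code_hint; (void)dtype_bits_hint;
     if (TVMPlatformMemoryAllocate((size_t)num_bytes, &ptr) != 0) return NULL;
     return ptr;
   }

   int main(void) {
     void* ws = TVMBackendAllocWorkspace(1, 0, 64, 2, 32);
     assert(ws != NULL);
     free(ws);
     return 0;
   }
   ```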
   
   one note I'll make: right now the "deployed inference application," as demonstrated/tested against Zephyr, would use graph_executor and c_runtime_api. this is just because we haven't finished developing the firmware-facing AOT interface yet.
   
   > Do we agree that, in the AOT path, we do not want to include/compile the c_runtime_api.[h,c]?
   
   yeah definitely. i hope the above diagram clarifies my position on that, and 
apologies for any confusion. Stuff is changing fast right now.
   
   > Do we agree that the 'g_last_error' (TVMGetSetLastError()) is set by the running inference application and not by the lower runtime layers: c_runtime_api|platform_specific?
   
   i think right now anything above c_backend_api could set that. c_runtime_api 
should be allowed to consume c_backend_api imo.
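   to illustrate the ownership i have in mind, a toy sketch of the last-error slot (the real entry points in TVM are `TVMAPISetLastError`/`TVMGetLastError` in c_runtime_api; `RunInferenceStep` and the error message are hypothetical):

   ```c
   #include <assert.h>
   #include <string.h>

   /* toy re-implementation of the last-error slot, only to show who is
    * allowed to set it--not the actual TVM source. */
   static const char* g_last_error = "";

   void TVMAPISetLastError(const char* msg) { g_last_error = msg; }
   const char* TVMGetLastError(void) { return g_last_error; }

   /* anything above c_backend_api (the application, or c_runtime_api
    * itself) may record an error before returning a nonzero status. */
   int RunInferenceStep(int ok) {
     if (!ok) {
       TVMAPISetLastError("inference step failed");  /* hypothetical message */
       return -1;
     }
     return 0;
   }

   int main(void) {
     assert(RunInferenceStep(1) == 0);
     assert(RunInferenceStep(0) == -1);
     assert(strcmp(TVMGetLastError(), "inference step failed") == 0);
     return 0;
   }
   ```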
   
   > I can propose to create some sort of 'c_runtime_defs.h' that would include platform-specific defines for:
   >
   > AI_API_ENTRY = TVM_DLL
   > AI_API_ALIGNED
   > AI_API_WEAK = TVM_WEAK
   > etc.
   >
   > The platform-specific implementation can include this file then instead of the 'c_runtime_api.h'.
   > Still the question of testing all different defines remains - but this is already the case with the 'c_runtime_api.h'.
   
   i think one question is whether we want to promote the `AI_API_` prefix to be the TVM standard prefix. if so, this makes sense to me; however, such a proposal would need to be made explicitly in an RFC on the discuss forum. i think we'd ultimately like to wind up with something compatible either way, but perhaps it makes sense to confine this `c_runtime_defs.h` to the STM32 code for now and promote it once the API settles. one other consideration: since the `AI_` prefix is already used by STM32 code, adopting it here would mean we'd need to be careful not to step on your namespace, whereas keeping the `tvm_` prefix should be safer.
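   for illustration, a `c_runtime_defs.h` under a `tvm_`-style prefix might look something like this (macro names here are hypothetical, not an agreed API, and the example use at the bottom is only to make the sketch self-checking):

   ```c
   #include <assert.h>
   #include <stdint.h>

   /* hypothetical c_runtime_defs.h sketch: attribute macros under a TVM_
    * prefix so platform code need not include c_runtime_api.h. */
   #ifndef TVM_API_ENTRY
   #define TVM_API_ENTRY  /* e.g. __declspec(dllexport) where needed */
   #endif

   #if defined(__GNUC__)
   #define TVM_API_ALIGNED(n) __attribute__((aligned(n)))
   #define TVM_API_WEAK __attribute__((weak))
   #else
   #define TVM_API_ALIGNED(n)
   #define TVM_API_WEAK
   #endif

   /* example use: a platform implementation picks up the macros directly. */
   TVM_API_ENTRY int TVM_API_ALIGNED(8) g_status;

   int main(void) {
   #if defined(__GNUC__)
     assert(((uintptr_t)&g_status % 8) == 0);
   #endif
     return 0;
   }
   ```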
   
   > Moving to a different code location is not an issue, please let us know which directory is most suitable.
   > And I also see that there is no reason for us to not reformat the C code - OK.
   
   OK--here's what i think:
   1. if it stays in `src/runtime/contrib/stm32`, it needs to get formatted to 
the TVM guidelines
   2. if it lives outside of `src`, particularly in e.g. `apps/microtvm/stm32`, 
i don't think the formatting needs to follow TVM guidelines. 
   
   i've been working on the project generator more this past week and will post it this week. in that branch, i've moved all of the Zephyr code into [`apps/microtvm/zephyr/demo_runtime`](https://github.com/areusch/incubator-tvm/tree/project-generator/apps/microtvm/zephyr/demo_runtime); in particular, the "emitter" is in `microtvm_api_server.py`. i'd ideally like to keep all the platform-specific stuff underneath `apps/microtvm/<platform>`, which lets TVM interface with a single target (the Model Library Format + microTVM API server) while continuing to target multiple platforms. in exchange, code in that directory doesn't necessarily need to follow all of TVM's C formatting standards (e.g. there is iOS code in some of those directories).
   
   i realize we had discussed this approach for stm32 before, but things 
weren't quite ready; plus you guys wanted to provide an improved API. we've 
discussed moving in the direction of improving microTVM's API such that the 
STM32 API can be provided by wrapping the microTVM API. when we reach that 
point, i'd like to move the STM32 code underneath `apps/microtvm/stm32` and use 
the project API (but, we may need to add to the API to allow generating 
libraries rather than whole projects, since the project generation for STM32 is 
handled by X-Cube tooling). 
   
   if you want to locate your code in `apps/microtvm/stm32` now and avoid 
reformatting, even if it doesn't use project generator, that would also be fine 
with me--let me know if this makes sense or not to you guys.
   
   > Yes, intentional. We do not have a possibility to host these test-cases for downloading and they cannot be downloaded from elsewhere.
   
   If you want to push a PR to https://github.com/tlc-pack/web-data (see 
`testdata/microTVM`), we could host them there.
   
   > Resuming: If we agree on re-organizing the 'c_runtime_api' as outlined above, I will align this PR and we will not need a separate PR in this case.
   
   yeah that sounds fine to me. ideally it'd be a separate PR just to keep 
things neat, but it's not a big change so if it keeps things operationally 
simpler for you, feel free to include it here.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

