stoa commented on a change in pull request #7742:
URL: https://github.com/apache/tvm/pull/7742#discussion_r630528798
##########
File path: src/runtime/stm32/graph_runtime.cc
##########
@@ -0,0 +1,529 @@
+/*
Review comment:
Hello, Andrew @areusch
My concern is about requiring applications to implement TVMLogf and TVMPlatformAbort.
While we implement TVMPlatformAlloc as part of the stm32 runtime (it is fairly
common functionality across various boards), TVMLogf and TVMPlatformAbort are
different: they are really board-specific and even application-specific. We
would like to avoid having hooks from the runtime back into the application.
Returning error codes from the crt_backend_api is our commonly preferred way
of handling errors here, and I am very interested in this being proposed for
the microTVM.
> We'd have to generate error-checking code in operator implementations,
> since that is where TVMBackendAllocWorkspace is called from. It's hard to
> generate macros such as CHECK_EQ there.
I am not sure I understand this point. If the memory allocation fails, we
have 2 choices:
- The operator code catches it and calls SetLastError(), possibly with a
detailed error code. I do not see why this is difficult or unclean - but
perhaps I do not have enough visibility here. In any case, this seems to have
already been implemented for the TVM operators. Is this going to change?
Which RFC?
- The crt layer catches the error and handles it via further hooks into the
application code. We prefer to minimize such hooks.
I am also not sure why you mentioned the CHECK_EQ macro.
So, I am not sure how I should proceed with the test. I could do this:
1. I move Set/GetLastError from crt_runtime_api.c to crt_backend_api.c.
2. We include crt_backend_api.c in our test; TVMLogf and TVMPlatformAbort
get 'dummy' implementations in the stm32 runtime.
Would this work for you?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]