Lunderberg commented on PR #15241:
URL: https://github.com/apache/tvm/pull/15241#issuecomment-1625968118

   > can you clarify an example that leverages CodegenCPU for kernel.
   
   @tqchen Certainly.  This mainly comes up in a few edge cases found when 
debugging a single-module lowering flow 
(https://github.com/apache/tvm/pull/14985), used for 
https://github.com/apache/tvm/pull/14862.  The issue arose when a `kDLExtDev` 
target or a custom `TIRToRuntime` hook was implemented by subclassing 
`CodeGenCPU` or `CodeGenCHost` (e.g. 
[`CodeGenCMSISNN`](https://github.com/apache/tvm/blob/main/src/relay/backend/contrib/cmsisnn/tir_to_runtime.cc#L39)).
   In those cases, the base class assumes that it is safe to return an error code (e.g. in [`CodeGenCHost::PrintGetFuncFromBackend`](https://github.com/apache/tvm/blob/main/src/target/source/codegen_c_host.cc#L221)), even when that `return` occurs within a portion of the code that has been split out into an independent function.
   
   These cases are mostly suppressed by the fix in 
https://github.com/apache/tvm/pull/15102, but can still happen if there's an 
explicit `T.target("my_custom_extension", host="llvm")`.  In those cases, the 
compute kernels occur within a function generated by `"my_custom_extension"`, 
while the DLTensor unpacking is still handled by the usual LLVM codegen.
   
   > Just want to make sure that our existing path of GPU-kernel separated codegen continues to function, as errors there are propagated by the call-packed mechanism
   
   Definitely agreed.  I updated the original PR to limit the `int32_t` return 
type to targets that may be executed on the CPU, so that the separated GPU 
kernels are unaffected.  This is sufficient for the functionality in 
https://github.com/apache/tvm/pull/14862, while avoiding changes to the GPU 
path.

