jroesch commented on pull request #8775: URL: https://github.com/apache/tvm/pull/8775#issuecomment-902319579
@comaniac Hey Cody, sorry for the slow response, it has been a busy week. Let me lay out my assumptions and mental model and see where we can align from there:

1. I think exposing `compile_engine.py` was a mistake made at some point in the past; we intentionally made `compile_engine.h` a private header so it would not become part of the public interface. Because the Python module was importable, it has been implicitly public, so we now need to deal with that.
2. I think we should have only one endorsed way to control scheduling, and a single unified lowering mechanism. Much of the pain in accelerator flows, BYOC, etc. stems from the many different ways people use the "compile engine style API".
3. My intention is to eventually make it impossible to lower functions one by one (i.e., JIT/Lower/LowerInternal), since that does not match any of the current compilation flows well and causes issues.
4. We can split out the internals of TECompiler to provide a functional API for producing tensors from a Relay function, which the compiler would then use, and potentially you in your use case as well.
5. Ideally, the end state is that TECompiler itself would be part of the private interface and not exposed in Python. I am happy to support use cases along the way if I better understand the critical needs.

I do agree that we need to move the few registered helper functions, but other than that `compile_engine` is dead code. All the other uses in the code base are just calls to turn off the cache. The graph_executor, vm, aot, and interpreter have all moved off that API. My main question is: what use case really needs the existing API, or would more fine-grained helper APIs let you do the customization you want?
