quic-sanirudh commented on issue #17193:
URL: https://github.com/apache/tvm/issues/17193#issuecomment-2247268098
> Ok, I will add this to the README.
>
> However, there are some other questions:
>
> 1. Can we import models from other frameworks, such as ONNX?
> 2. Can we import a float32 model and apply AIMET quantization encoding information?
> 3. Can we use QNN as the runtime via BYOC?
> 4. After importing the InceptionV4 TFLite model and then trying to build it as:
>
> ```python
> mod, params = relay.frontend.from_tflite(tflite_model)
> target = tvm.target.hexagon('v66', hvx=0)
> with tvm.transform.PassContext(opt_level=3):
>     lib = relay.build(mod, tvm.target.Target(target, host=target),
>                       params=params, mod_name="default")
> ```
>
> there is an error: `LLVM ERROR: Do not know how to split the result of this operator!`
1. Importing ONNX models through the ONNX importer in Relay is supported. There are some examples in the Hexagon contrib tests you can refer to.
2. No, AIMET quantization is not supported in TVM.
3. No, we don't support QNN through BYOC.
4. That sounds like an error in LLVM lowering, which would need to be fixed in LLVM. Please open a separate issue with steps to reproduce and we can try to fix it.
Let me know if it's okay to close this issue, since you figured out the fix.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]