ConvolutedDog commented on issue #18136: URL: https://github.com/apache/tvm/issues/18136#issuecomment-3067172263
With the changes in PR https://github.com/apache/tvm/pull/18143, the following script works as expected.

<details>
<summary>Click here for code</summary>

```py
import pickle
import sys

import numpy as np
import onnx
import onnxruntime
import tvm
from tvm import relax
from tvm.relax.frontend.onnx import from_onnx


def main():
    onnx_model = onnx.load("111.onnx")
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    # Run the reference model through ONNX Runtime first.
    try:
        ort_session = onnxruntime.InferenceSession(
            onnx_model.SerializeToString(), providers=["CPUExecutionProvider"]
        )
        ort_output = ort_session.run([], inputs)
    except Exception as e:
        print(e)
        sys.exit(1)

    print("inputs:\n", inputs)

    # Convert the ONNX model into Relax through the ONNX importer.
    tvm_model = from_onnx(onnx_model, keep_params_in_input=True)
    tvm_model.show()

    tvm_model, params = relax.frontend.detach_params(tvm_model)
    input_list = [
        tvm.nd.array(inputs[key.name_hint])
        for key in tvm_model["main"].params
        if key.name_hint in inputs
    ]
    if params:
        input_list += params["main"]

    # Compile for CPU and execute with the Relax virtual machine.
    ex = relax.build(tvm_model, target="llvm")
    vm = relax.VirtualMachine(ex, tvm.cpu())
    nd_res = vm["main"](*input_list)

    print("ONNX output:\n", ort_output)
    print("TVM output:\n", nd_res)


if __name__ == "__main__":
    main()
```

</details>
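The script above only prints both outputs side by side. To check agreement programmatically, the two result lists can be compared numerically. A minimal sketch using NumPy only (the `outputs_match` helper name and the tolerance values are assumptions, not part of the original script; a TVM `NDArray` would first be converted with `.numpy()`):

```python
import numpy as np


def outputs_match(ort_outputs, tvm_outputs, rtol=1e-5, atol=1e-5):
    """Compare two sequences of array-likes element-wise within tolerances.

    ONNX Runtime returns a list of np.ndarray; TVM results can be passed
    as np.ndarray after converting each NDArray with .numpy().
    """
    if len(ort_outputs) != len(tvm_outputs):
        return False
    return all(
        np.allclose(np.asarray(a), np.asarray(b), rtol=rtol, atol=atol)
        for a, b in zip(ort_outputs, tvm_outputs)
    )


# Synthetic data standing in for real model outputs:
close = outputs_match([np.array([1.0, 2.0])], [np.array([1.0, 2.0 + 1e-7])])
far = outputs_match([np.array([1.0, 2.0])], [np.array([1.0, 3.0])])
print(close)  # True: difference is within the default tolerances
print(far)    # False: 2.0 vs 3.0 exceeds the tolerances
```

For floating-point models a tolerance-based comparison like this is generally preferable to exact equality, since codegen differences between backends routinely produce small rounding deviations.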
