argrento opened a new pull request #9811:
URL: https://github.com/apache/tvm/pull/9811
- Issue
The `debug_executor` workflow described in
https://tvm.apache.org/docs/arch/debugger.html#how-to-use-debugger behaves
strangely. When I used the code from step 4, the inference result differed
from run to run. After a short investigation, I found that the `params` were
not set correctly. Code that I used:
```python
# dev, data, dtype and out_shape are assumed to be defined as in the
# debugger example; per the docs, the normal graph_executor is swapped
# for the debug version:
import onnx
import tvm
from tvm import relay
from tvm.contrib.debugger import debug_executor as graph_executor

onnx_model = onnx.load("./resnet50-v2-7.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, {"data": (1, 3, 224, 224)})
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
lib_name = "lib.so"
lib.export_library(lib_name)
loaded_lib = tvm.runtime.load_module(lib_name)
m = graph_executor.create(loaded_lib["get_graph_json"](), loaded_lib, dev,
                          dump_root="/tmp/tvmdbg")
m.set_input("data", tvm.nd.array(data.astype(dtype)))
m.set_input(**params)
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(out_shape, dtype)).numpy()
```
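The failure mode can be reproduced in miniature without TVM. Below is a hypothetical pure-Python mock (all names are illustrative, not TVM APIs): if the weight tensors are never initialized, every run sees different garbage values, while setting the params makes the output deterministic.

```python
import random

class MockGraphExecutor:
    """Illustrative stand-in for an executor, not a TVM API."""

    def __init__(self):
        self.weight = None  # weights start as uninitialized "memory"

    def set_input(self, **params):
        self.weight = params.get("weight", self.weight)

    def run(self, x):
        # An unset weight is modeled as random garbage read from memory.
        w = self.weight if self.weight is not None else random.random()
        return x * w

m = MockGraphExecutor()
out1, out2 = m.run(2.0), m.run(2.0)  # params unset: differs run to run
m.set_input(weight=0.5)
out3, out4 = m.run(2.0), m.run(2.0)  # params set: deterministic
```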
- Solution
- Implementation of a `get_graph_params` function, which makes it possible to
get the params directly from the lib file. Thus a single line can be added to
the debugger example code:
```python
lib = tvm.runtime.load_module("network.so")
params = lib['get_graph_params']() # <-----
m = graph_executor.create(lib["get_graph_json"](), lib, dev,
                          dump_root="/tmp/tvmdbg")
# set inputs
m.set_input('data', tvm.nd.array(data.astype(dtype)))
m.set_input(**params)
# execute
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(out_shape, dtype)).numpy()
```
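For context, `m.set_input(**params)` works because `GraphModule.set_input` accepts extra named parameters as keyword arguments, so the mapping returned by `get_graph_params` is unpacked into one entry per named parameter. A minimal mock of that pattern (the `MockModule` class here is illustrative, not a TVM API):

```python
# Illustrative mock of the set_input(key, value, **params) pattern:
# set_input("data", x) sets one input; set_input(**params) sets many.
class MockModule:
    def __init__(self):
        self.inputs = {}

    def set_input(self, key=None, value=None, **params):
        if key is not None:
            self.inputs[key] = value
        for name, tensor in params.items():
            self.inputs[name] = tensor

params = {"conv1_weight": [1.0], "conv1_bias": [0.1]}  # stand-in for NDArrays
m = MockModule()
m.set_input("data", [3.0])
m.set_input(**params)  # expands to one entry per named parameter
```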
After this, the inference result is correct.
- As an additional safeguard, a warning was added. It is shown when a
developer calls `run()` before setting the inputs and params.
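A sketch of what such a check could look like in pure Python (the class and message are hypothetical, not the actual PR implementation):

```python
import warnings

class MockDebugExecutor:
    """Illustrative stand-in for the debug executor, not the actual PR code."""

    def __init__(self):
        self._inputs_set = False

    def set_input(self, key=None, value=None, **params):
        self._inputs_set = True

    def run(self):
        if not self._inputs_set:
            warnings.warn(
                "Inputs and params are not set; inference results may be wrong."
            )

ex = MockDebugExecutor()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    ex.run()  # nothing set yet, so the warning fires
```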
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]