mkroening opened a new issue #9657:
URL: https://github.com/apache/tvm/issues/9657
### Expected behavior
Tuning should complete without error when running
```console
tvmc tune --target llvm --output autotuner_records.json model.onnx
```
on an ONNX model consisting of a single `Add` node that adds two scalar values. Note that compiling the model without tuning works and produces correct results.
Model:
```onnx
ir_version: 8
graph {
  node {
    input: "A"
    input: "B"
    output: "C"
    op_type: "Add"
  }
  name: "test-model"
  input {
    name: "A"
    type {
      tensor_type {
        elem_type: 1
        shape {
          dim {
            dim_value: 1
          }
        }
      }
    }
  }
  input {
    name: "B"
    type {
      tensor_type {
        elem_type: 1
        shape {
          dim {
            dim_value: 1
          }
        }
      }
    }
  }
  output {
    name: "C"
    type {
      tensor_type {
        elem_type: 1
        shape {
          dim {
            dim_value: 1
          }
        }
      }
    }
  }
}
opset_import {
  version: 15
}
```
### Actual behavior
```
# [..]
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/mkroening/Development/tvm/python/tvm/driver/tvmc/__main__.py", line 24, in <module>
    tvmc.main.main()
  File "/home/mkroening/Development/tvm/python/tvm/driver/tvmc/main.py", line 100, in main
    sys.exit(_main(sys.argv[1:]))
  File "/home/mkroening/Development/tvm/python/tvm/driver/tvmc/main.py", line 93, in _main
    return args.func(args)
  File "/home/mkroening/Development/tvm/python/tvm/driver/tvmc/autotuner.py", line 266, in drive_tune
    tune_model(
  File "/home/mkroening/Development/tvm/python/tvm/driver/tvmc/autotuner.py", line 472, in tune_model
    trials = int(trials / len(tasks))
ZeroDivisionError: division by zero
```
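The traceback shows that `tune_model` divides the requested trial budget by the number of extracted tuning tasks; for this model no tunable tasks are extracted, so `len(tasks)` is 0. A minimal sketch of the kind of guard that would avoid the crash (a hypothetical standalone helper, not the actual TVM code):

```python
def trials_per_task(total_trials, tasks):
    """Split a trial budget evenly across tuning tasks, guarding
    against the case where task extraction found nothing to tune."""
    if not tasks:
        # A graph containing only ops without tunable schedules
        # (such as a single scalar Add) can yield zero tasks;
        # fail with a clear message instead of dividing by zero.
        raise ValueError("no tunable tasks were extracted from the model")
    return int(total_trials / len(tasks))
```

With zero tasks the current code raises a bare `ZeroDivisionError`; a guard like this would turn that into an actionable error message (or tuning could simply be skipped when there is nothing to tune).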
### Environment
Operating System: Ubuntu 20.04.3 LTS
TVM version: ccd59e89d21cc81cc06f2a16cddcc1ffeed1e2a1
### Steps to reproduce
Create `model.onnx` with:
```python
import onnx
from onnx import helper
from onnx import TensorProto

l = 1

A = helper.make_tensor_value_info('A', TensorProto.FLOAT, [l])
B = helper.make_tensor_value_info('B', TensorProto.FLOAT, [l])
C = helper.make_tensor_value_info('C', TensorProto.FLOAT, [l])

node_def = helper.make_node(
    'Add',       # op type
    ['A', 'B'],  # inputs
    ['C'],       # outputs
)

graph_def = helper.make_graph(
    [node_def],    # nodes
    'test-model',  # name
    [A, B],        # inputs
    [C],           # outputs
)

model_def = helper.make_model(graph_def)
print('The model is:\n{}'.format(model_def))
onnx.checker.check_model(model_def)
print('The model is checked!')
onnx.save(model_def, 'model.onnx')
```
Thanks a lot for your help! :)