alexandrepires5 opened a new issue, #11780:
URL: https://github.com/apache/tvm/issues/11780
Hello there,

I have been hitting this issue while trying to autotune a YOLOX-Nano model (416x416). I am experimenting with the autotune configuration through the Python API, but I get this runtime error:

```
RuntimeError: Invalid type of axis: <class 'tvm.tir.expr.SizeVar'>
```

The code I have is the following:
```python
from tvm.autotvm.tuner import XGBTuner
from tvm import autotvm
import tvm.relay as relay
import onnx

number = 10
repeat = 1
min_repeat_ms = 0  # since we're tuning on a CPU, this can be set to 0
timeout = 10  # in seconds

input_name = "inputs"
shape_dict = {input_name: (1, 3, 416, 416)}
model_path = "yolox_nano_416x416.onnx"
onnx_model = onnx.load(model_path)
target = "llvm"
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# create a TVM runner
runner = autotvm.LocalRunner(
    number=number,
    repeat=repeat,
    timeout=timeout,
    min_repeat_ms=min_repeat_ms,
    enable_cpu_cache_flush=True,
)

tuning_option = {
    "tuner": "xgb",
    "trials": 200,  # when deploying to production, this should be at least 100x higher
    "early_stopping": 100,
    "measure_option": autotvm.measure_option(
        builder=autotvm.LocalBuilder(build_func="default"), runner=runner
    ),
    "tuning_records": "yolox-nano-416-autotuning.json",
}

# begin by extracting the tasks from the onnx model
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

# tune the extracted tasks sequentially
for i, task in enumerate(tasks):
    prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
    tuner_obj = XGBTuner(task, loss_type="rank")
    tuner_obj.tune(
        n_trial=min(tuning_option["trials"], len(task.config_space)),
        early_stopping=tuning_option["early_stopping"],
        measure_option=tuning_option["measure_option"],
        callbacks=[
            autotvm.callback.progress_bar(tuning_option["trials"], prefix=prefix),
            autotvm.callback.log_to_file(tuning_option["tuning_records"]),
        ],
    )
```
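For what it's worth, the error message points at a `tvm.tir.expr.SizeVar`, i.e. a symbolic (dynamic) dimension reaching autotvm's `cfg.define_split` during task extraction. As a sanity check I verified that every dimension I hand to `from_onnx` is a concrete positive int. A minimal, TVM-free sketch of that check (`find_dynamic_dims` is a hypothetical helper of mine, not a TVM API):

```python
def find_dynamic_dims(shape_dict):
    """Return {input_name: [axis indices]} for dims that are not concrete ints."""
    dynamic = {}
    for name, shape in shape_dict.items():
        # anything that is not a positive int (e.g. a symbolic name like
        # "batch") would end up as a SizeVar inside TVM
        bad = [i for i, d in enumerate(shape) if not isinstance(d, int) or d <= 0]
        if bad:
            dynamic[name] = bad
    return dynamic

# a fully concrete shape dict (like the one above) passes cleanly
print(find_dynamic_dims({"inputs": (1, 3, 416, 416)}))        # {}
# while a symbolic batch dim is flagged
print(find_dynamic_dims({"inputs": ("batch", 3, 416, 416)}))  # {'inputs': [0]}
```

My `shape_dict` passes this check, so I assume the dynamic dimension is introduced somewhere inside the model itself rather than at the input.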
These are my logs (the "not in feed graph" warning, which repeats with different addresses, is collapsed for brevity):

```
[22:03:06] /home/conda/feedstock_root/build_artifacts/libtvm_1648135735928/work/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func_nchw, 0x7f6b08419570)
[... the same warning repeated 15 more times at 22:03:06 with different addresses ...]
[22:03:23] /home/conda/feedstock_root/build_artifacts/libtvm_1648135735928/work/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func_nchw, 0x7f6b08c696e0)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/relay_integration.py", line 55, in _lower
    compiler.lower(mod, target=target)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/relay/backend/vm.py", line 155, in lower
    self._lower(mod, target, target_host)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  26: TVMFuncCall
  25: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::vm::VMCompiler::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  24: tvm::relay::vm::VMCompiler::Lower(tvm::IRModule, tvm::runtime::Map<tvm::Integer, tvm::Target, void, void> const&, tvm::Target const&)
  23: tvm::relay::vm::VMFunctionCompiler::Compile(tvm::GlobalVar const&, tvm::relay::Function const&)
  22: tvm::relay::transform::DeviceAwareExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr_(tvm::relay::FunctionNode const*)
  21: tvm::relay::vm::VMFunctionCompiler::DeviceAwareVisitExpr_(tvm::relay::FunctionNode const*)
  20: tvm::relay::transform::DeviceAwareExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr_(tvm::relay::LetNode const*)
  19: tvm::relay::vm::VMFunctionCompiler::PreVisitLetBinding_(tvm::relay::Var const&, tvm::RelayExpr const&)
  18: tvm::relay::transform::DeviceAwareExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr_(tvm::relay::CallNode const*)
  17: tvm::relay::transform::DeviceAwareExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr_(tvm::relay::CallNode const*)
  16: tvm::relay::vm::VMFunctionCompiler::DeviceAwareVisitExpr_(tvm::relay::CallNode const*)
  15: std::_Function_handler<void (tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&), tvm::relay::vm::VMFunctionCompiler::DeviceAwareVisitExpr_(tvm::relay::CallNode const*)::{lambda(tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&)
  14: tvm::relay::vm::VMFunctionCompiler::EmitInvokeTVMOp(tvm::relay::Function const&, tvm::RelayExpr const&, tvm::RelayExpr const&)
  13: tvm::relay::tec::TECompilerImpl::Lower(tvm::relay::tec::CCacheKey const&, std::function<tvm::runtime::String (tvm::runtime::String)>)
  12: tvm::relay::tec::TECompilerImpl::LowerInternal(tvm::relay::tec::CCacheKey const&, std::function<tvm::runtime::String (tvm::runtime::String)>)
  11: tvm::relay::tec::PrimFuncFor(tvm::relay::Function const&, tvm::Target const&, std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>)
  10: tvm::relay::tec::ScheduleBuilder::Create(tvm::relay::Function const&, std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>)
  9: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  8: _ZZN3tvm5relay11ExprFunctorIFNS_7runtime5ArrayINS_2te6TensorEvEERKNS_9
  7: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  6: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  5: _ZZN3tvm5relay11ExprFunctorIFNS_7runtime5ArrayINS_2te6TensorEvEERKNS_9
  4: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  3: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  2: _ZZN3tvm5relay11ExprFunctorIFNS_7runtime5ArrayINS_2te6TensorEvEERKNS_9
  1: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) [clone .cold]
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/relay/backend/te_compiler.py", line 314, in lower_call
    best_impl, outputs = select_implementation(
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/relay/backend/te_compiler.py", line 189, in select_implementation
    outs = best_plevel_impl.compute(attrs, inputs, out_type)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/relay/op/op.py", line 126, in compute
    return _OpImplementationCompute(self, attrs, inputs, out_type)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 267, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
  3: TVMFuncCall
  2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::__mk_TVM6::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  1: tvm::relay::OpImplementation::Compute(tvm::Attrs const&, tvm::runtime::Array<tvm::te::Tensor, void> const&, tvm::Type const&)
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) [clone .cold]
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/relay/op/strategy/generic.py", line 243, in _compute_conv2d
    return [topi_compute(*args)]
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/topi/x86/conv2d.py", line 129, in conv2d_nchw
    packed_out = conv2d_NCHWc(data, kernel, strides, padding, dilation, layout, layout, out_dtype)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/topi_integration.py", line 165, in wrapper
    node = topi_compute(cfg, *args)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/topi/x86/conv2d.py", line 194, in conv2d_NCHWc
    cfg.define_split("tile_ic", in_channel, num_outputs=2)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 730, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 832, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 832, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 687, in axis
    return VirtualAxis(var)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 141, in __init__
    raise RuntimeError("Invalid type of axis: " + str(type(var)))
RuntimeError: Invalid type of axis: <class 'tvm.tir.expr.SizeVar'>
Traceback (most recent call last):
  File "/home/alex/Documents/FreelanceWork/PeopleCounting/tracking-model/tune_model.py", line 41, in <module>
    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/relay_integration.py", line 87, in extract_from_program
    return extract_from_multiple_program([mod], [params], target, ops=ops)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/relay_integration.py", line 153, in extract_from_multiple_program
    tsk = create(task_name, args, target=target)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/task.py", line 485, in create
    sch, _ = ret.func(*args)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/task.py", line 240, in __call__
    return self._default_func(*args, **kwargs)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/task.py", line 246, in _default_func
    out = self.fcompute(*args, **kwargs)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/topi_integration.py", line 165, in wrapper
    node = topi_compute(cfg, *args)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/topi/x86/conv2d.py", line 194, in conv2d_NCHWc
    cfg.define_split("tile_ic", in_channel, num_outputs=2)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 730, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 832, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 832, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 687, in axis
    return VirtualAxis(var)
  File "/home/alex/miniconda3/envs/tracking-model/lib/python3.8/site-packages/tvm/autotvm/task/space.py", line 141, in __init__
    raise RuntimeError("Invalid type of axis: " + str(type(var)))
RuntimeError: Invalid type of axis: <class 'tvm.tir.expr.SizeVar'>

Process finished with exit code 1
```
My TVM version is 0.8.0, running on a Ryzen 2400G with a GTX 1070.

Is there something I am doing wrong, or is this a TVM issue? Issue #10042 seemed to hit the same problem, but I would like to avoid that workaround. Is there a clean way of fixing this?