w-tingting opened a new issue #7872:
URL: https://github.com/apache/tvm/issues/7872
Hi! I quantized a model with PyTorch, and I am stuck on a problem while converting this model with TVM. The code is as follows:
```python
inp = torch.rand(1, 3, 32, 32)
model = Net(args.batch_size, args.mc, args.kernel_type, args.num_classes).to(device)
qmodel = quantization_model(model)
qmodel.eval()
qmodel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
qmodel = torch.quantization.prepare(qmodel, inplace=False)
# qmodel(inp)
qmodel = torch.quantization.convert(qmodel, inplace=False)
qmodel.load_state_dict(torch.load('CIFAR10_net_04-17-2021_17-10-42.pth'))
script_module = torch.jit.trace(qmodel, inp).eval()
input_name = "input"  # the input name can be arbitrary for the PyTorch frontend
input_shapes = [(input_name, (1, 3, 32, 32))]
mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
tvm_result, rt_mod = run_tvm_model(mod, params, input_name, inp, target="gpu")
```
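For reference, below is a minimal, self-contained sketch of the eager-mode post-training static quantization flow the snippet follows. `SimpleNet` is a hypothetical stand-in for `Net` (the real code would load its checkpoint between `convert()` and `trace()`). Two details matter: `prepare()` must receive the same module that carries the `qconfig`, and a calibration forward pass normally runs before `convert()`:

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """Hypothetical stand-in for the issue's Net, with Quant/DeQuant stubs."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 32 * 32, 10)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)          # fp32 -> quantized
        x = self.relu(self.conv(x))
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return self.dequant(x)     # quantized -> fp32

inp = torch.rand(1, 3, 32, 32)
qmodel = SimpleNet().eval()
qmodel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# prepare() must see the module that carries the qconfig
qmodel = torch.quantization.prepare(qmodel, inplace=False)
qmodel(inp)  # calibration pass so the observers record ranges
qmodel = torch.quantization.convert(qmodel, inplace=False)
# (a real workflow would load a quantized checkpoint here)
script_module = torch.jit.trace(qmodel, inp).eval()
```

The traced module can then be handed to `relay.frontend.from_pytorch` as in the snippet above.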
The problem seems to arise at `mod, params = relay.frontend.from_pytorch(script_module, input_shapes)`. The error is as follows:
```
The Relay type checker is unable to show the following types match.
In particular dimension 0 conflicts: 1200 does not match 400.
The Relay type checker is unable to show the following types match.
In particular `Tensor[(400), float32]` does not match `Tensor[(1200), float32]`
note: run with `TVM_BACKTRACE=1` environment variable to display a backtrace.
```
As it suggests, I ran it again with `TVM_BACKTRACE=1` set and got this error:
```
The Relay type checker is unable to show the following types match.
In particular dimension 0 conflicts: 1200 does not match 400.
The Relay type checker is unable to show the following types match.
In particular `Tensor[(400), float32]` does not match `Tensor[(1200), float32]`
Traceback (most recent call last):
  File "tvm_transform.py", line 84, in <module>
    mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
  File "/home/wtt/tvm/python/tvm/relay/frontend/pytorch.py", line 3287, in from_pytorch
    ret = converter.convert_operators(_get_operator_nodes(graph.nodes()), outputs, ret_name)[0]
  File "/home/wtt/tvm/python/tvm/relay/frontend/pytorch.py", line 2711, in convert_operators
    self.record_output_type(relay_out)
  File "/home/wtt/tvm/python/tvm/relay/frontend/pytorch.py", line 222, in record_output_type
    self.infer_type_with_prelude(output)
  File "/home/wtt/tvm/python/tvm/relay/frontend/pytorch.py", line 170, in infer_type_with_prelude
    body = self.infer_type(val, self.prelude.mod)
  File "/home/wtt/tvm/python/tvm/relay/frontend/pytorch.py", line 163, in infer_type
    new_mod = transform.InferType()(new_mod)
  File "/home/wtt/tvm/python/tvm/ir/transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/wtt/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  5: TVMFuncCall
  4: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  3: tvm::transform::Pass::operator()(tvm::IRModule) const
  2: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  0: tvm::DiagnosticContext::Render()
  File "/home/wtt/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
```
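A dimension-0 conflict like 1200 vs 400 usually means some weight tensor the frontend sees disagrees with the shapes the architecture implies, for example a `Linear` head built for a different flattened size than the checkpoint was trained with. A quick hypothetical sanity check, before involving TVM at all, is to diff the model's `state_dict` shapes against the checkpoint's (here two toy `Linear` heads stand in for the real model and checkpoint):

```python
import torch.nn as nn

# Hypothetical stand-ins: the model being traced expects a 400-wide input to
# its classifier, while the checkpoint was saved from a 1200-wide variant --
# the kind of disagreement Relay's type checker reports.
model = nn.Sequential(nn.Flatten(), nn.Linear(400, 10))
checkpoint = nn.Sequential(nn.Flatten(), nn.Linear(1200, 10)).state_dict()

for name, tensor in model.state_dict().items():
    ours = tuple(tensor.shape)
    theirs = tuple(checkpoint[name].shape) if name in checkpoint else None
    if ours != theirs:
        print(f"mismatch at {name}: model {ours} vs checkpoint {theirs}")
```

If this loop reports a mismatch, the checkpoint and the model definition have diverged, and no TVM-side fix will help until they agree.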
How can I solve this problem? Looking forward to your reply. Thank you!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]