vanjuan339 opened a new issue, #17705:
URL: https://github.com/apache/tvm/issues/17705

   ### Expected behavior
   
Quantizing a custom ONNX model with `relay.quantize.quantize` should complete without error.
   
   ### Actual behavior
   
Calling `mod = relay.quantize.quantize(mod, params=params, dataset=dataset)` fails during type inference with the following error:
   
   ```
   3: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
   2: tvm::relay::TypeSolver::Solve()
   1: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
   0: tvm::relay::quantize::SimulatedQuantizeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)
   File "/tvm/src/relay/quantize/quantize.cc", line 52
   InternalError: Check failed: data->shape.size() != 0 (0 vs. 0) : Input shape cannot be empty
   ```
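   
   The check at `src/relay/quantize/quantize.cc:52` fires when the tensor fed to a `simulated_quantize` op has rank 0, i.e. some expression in the imported graph is a scalar. As a debugging aid (my own sketch, not part of TVM's API), the snippet below walks the type-inferred module and collects every expression whose checked type is a 0-dim `TensorType`, which should point at the node producing the empty shape:
   
   ```python
   import tvm
   from tvm import relay
   
   def find_scalar_tensors(mod):
       """Return expressions whose inferred type is a rank-0 TensorType --
       exactly the shapes that trip data->shape.size() != 0 in
       SimulatedQuantizeRel."""
       mod = relay.transform.InferType()(mod)
       scalars = []
   
       def visit(expr):
           try:
               ty = expr.checked_type
           except ValueError:
               return  # node not populated by the type checker
           if isinstance(ty, relay.TensorType) and len(ty.shape) == 0:
               scalars.append(expr)
   
       relay.analysis.post_order_visit(mod["main"], visit)
       return scalars
   ```
   
   Running `find_scalar_tensors(mod)` right after `from_onnx` should show which part of the model produces the scalar.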
   
   ### Environment
   
* OS: x86_64 GNU/Linux
   * TVM: 0.19.0 (Python 3.9)
   
   ### Steps to reproduce
   
```python
   import onnx
   import numpy as np
   import tvm
   from tvm import relay
   
   # Import the ONNX model into Relay.
   onnx_model_path = "./model.onnx"
   onnx_model = onnx.load(onnx_model_path)
   input_name = "image"
   input_shape = (1, 3, 32, 900)
   mod, params = relay.frontend.from_onnx(onnx_model, shape={input_name: input_shape})
   
   # Random calibration dataset: 100 batches keyed by the input name.
   dataset = [{"image": np.random.randn(1, 3, 32, 900).astype("float32")} for _ in range(100)]
   
   with tvm.transform.PassContext(opt_level=3):
       with relay.quantize.qconfig(
           calibrate_mode="kl_divergence",
           weight_scale="max",
           skip_conv_layers=[],
           skip_dense_layer=False,
       ):
           quantized_mod = relay.quantize.quantize(mod, params, dataset=dataset)
   quantized_mod.show()
   ```
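   
   One possible workaround (an untested assumption on my part, not a confirmed fix): bind the weights into the function and run constant folding before quantizing, so that scalar constant subgraphs are evaluated away before the annotate pass inserts `simulated_quantize`. Whether this avoids the error depends on where the rank-0 tensor in `model.onnx` actually comes from:
   
   ```python
   import tvm
   from tvm import relay
   
   # Hypothetical pre-processing step before relay.quantize.quantize.
   mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)
   with tvm.transform.PassContext(opt_level=3):
       mod = tvm.transform.Sequential([
           relay.transform.SimplifyInference(),
           relay.transform.FoldConstant(),
       ])(mod)
   ```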
   
   ### Triage
   
   * needs-triage
   

