ry3s opened a new issue, #15931:
URL: https://github.com/apache/tvm/issues/15931

   ### Expected behavior
   
   The `FakeQuantizationToInteger` pass completes without error.
   
   ### Actual behavior
   
   Segmentation fault.
   
   ### Environment
   
   * TVM: [fa4aeee64efbd55db1f502a220276f9851c52f15](https://github.com/apache/tvm/tree/fa4aeee64efbd55db1f502a220276f9851c52f15)
   * onnx: 1.13.1
   * onnxruntime-gpu: 1.14.1
   * torch: 2.1.0
   
   ### Steps to reproduce
   
   ```python
   import numpy as np
   import onnx
   import torch
    from onnxruntime.quantization import CalibrationDataReader, quantize_static, QuantFormat
   from torch import nn
   
   import tvm
   from tvm import relay
   
   INPUT_SHAPE = (1, 24, 24, 3)
   
   
   def write_relay_to_txt(obj, filename):
       with open(filename, "w") as f:
           f.write(relay.astext(obj, show_meta_data=False))
   
   
    class RandomDataReader(CalibrationDataReader):
        """Feeds 100 random batches to onnxruntime's static quantization calibrator."""

        def __init__(self) -> None:
            super().__init__()
            self.i = 0

        def get_next(self) -> dict | None:
            if self.i >= 100:
                return None
            self.i += 1
            return {"input": np.random.rand(*INPUT_SHAPE).astype(np.float32)}
   
   
   def main():
       model = nn.Linear(3, 3, bias=False)
       input = torch.rand(INPUT_SHAPE, dtype=torch.float32)
        torch.onnx.export(
            model, input, "linear.onnx", input_names=["input"], output_names=["output"]
        )
   
       reader = RandomDataReader()
       quantize_static(
           "linear.onnx", "linear_quant.onnx", reader, per_channel=True, 
quant_format=QuantFormat.QDQ
       )
   
       onnx_model = onnx.load("linear_quant.onnx")
        mod, param = relay.frontend.from_onnx(onnx_model, shape={"input": INPUT_SHAPE})
       write_relay_to_txt(mod, "onnx_relay.txt")
       with tvm.transform.PassContext(opt_level=2):
           mod = relay.transform.InferType()(mod)
           mod = relay.transform.SimplifyInference()(mod)
            mod = relay.transform.FakeQuantizationToInteger()(mod)  # segfault occurs here
           write_relay_to_txt(mod, "relay.txt")
   
       lib = relay.build(mod, target="llvm", params=param)
       lib.export_library("linear_quant.tar")
   
   
   if __name__ == "__main__":
       main()
   ```
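   Since the crash is a hard segmentation fault in native code, no Python traceback is printed. As a debugging aid (not part of the reproduction itself), the standard-library `faulthandler` module can be enabled at the top of the script so the interpreter dumps the Python-level stack when the process receives SIGSEGV:

   ```python
   import faulthandler

   # On SIGSEGV/SIGFPE/SIGABRT, dump the Python stack of all threads to
   # stderr. This shows which pass invocation was active when the native
   # code crashed, before attaching gdb for the C++ frames.
   faulthandler.enable()
   ```

   Running the script under `gdb --args python repro.py` would additionally expose the native frames inside the pass.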
   ### Triage
   
   * needs-triage
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
