YuhengHuang42 commented on issue #7563:
URL: https://github.com/apache/tvm/issues/7563#issuecomment-811245830


   Hi, I'm interested in this bug and did some experiments. Here are some 
findings:
   
   1. If you run the FuseOps transform first on the Relay graph, i.e.:
   
   ```python
   import tvm
   from tvm import relay
   from tvm.relay import transform

   seq = tvm.transform.Sequential(
       [
           transform.SimplifyInference(),
           transform.FuseOps(),
       ]
   )
   with tvm.transform.PassContext(opt_level=opt_level):
       mod = seq(mod)
   # build the model
   # ...
   ```
   
   Then the final result is correct.
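   (For context, by "correct" I mean the compiled model's output matches the reference framework's output within floating-point tolerance. A typical check looks like the sketch below; `ref_out` and `tvm_out` are placeholder arrays standing in for the real outputs.)

```python
import numpy as np

def outputs_match(ref_out, tvm_out, rtol=1e-5, atol=1e-5):
    # Elementwise comparison with tolerance -- the usual way to decide
    # whether a compiled model's output agrees with a reference result.
    return np.allclose(ref_out, tvm_out, rtol=rtol, atol=atol)

# Hypothetical outputs, just to show usage:
ref_out = np.array([0.12, 0.88])
tvm_out = np.array([0.12, 0.88])
print(outputs_match(ref_out, tvm_out))  # True
```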
   
   2. If you use opt_level=4 to build the model, then the final result is also 
correct:
   
   ```python
   opt_level = 4
   with tvm.transform.PassContext(opt_level=opt_level):
       lib = relay.build(mod, target='llvm', params=param)
   ```
   
   This seemed pretty weird to me, so I disabled some passes to dig deeper.
   
   ```python
   disabled_pass = [
       "CombineParallelConv2D",
       "CombineParallelDense",
       "CombineParallelBatchMatmul",
       "FastMath",
   ]
   opt_level = 4
   with tvm.transform.PassContext(opt_level=opt_level, disabled_pass=disabled_pass):
       lib = relay.build(mod, target='llvm', params=param)
   ```
   
   As far as I know, these four passes are the only additional ones enabled at opt_level=4. However, disabling them doesn't affect the outcome: the final result is still correct.
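   (To make my reasoning explicit: as I understand it, PassContext runs a pass only if the pass's own opt_level does not exceed the context's opt_level and the pass is not in `disabled_pass`. The toy model below is just my mental model, not TVM's actual implementation; the pass levels are the ones I believe these passes are registered at.)

```python
# Toy model of PassContext gating (my understanding, not TVM source).
# Each pass is registered with an opt_level; it runs when that level
# is <= the context's opt_level and it is not explicitly disabled.
PASS_LEVELS = {
    "SimplifyInference": 0,
    "FuseOps": 0,
    "CombineParallelConv2D": 4,
    "CombineParallelDense": 4,
    "CombineParallelBatchMatmul": 4,
    "FastMath": 4,
}

def enabled_passes(opt_level, disabled=()):
    # Return the passes that would actually run, alphabetically sorted.
    return [name for name, level in sorted(PASS_LEVELS.items())
            if level <= opt_level and name not in disabled]

print(enabled_passes(3))
# ['FuseOps', 'SimplifyInference']
print(enabled_passes(4, disabled=["FastMath"]))
# ['CombineParallelBatchMatmul', 'CombineParallelConv2D',
#  'CombineParallelDense', 'FuseOps', 'SimplifyInference']
```

   Under this model, opt_level=4 with all four level-4 passes disabled should behave like opt_level=3 for this module, which makes it surprising that the two settings give different results.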
   
   My environment:
   
   Built from source at commit 2988a08e3ff4a8956ac9b23e662374f6d8f7f4d9,
   
   OS: macOS 10.15.7
   
   As I'm new to TVM, I'm stuck here and can't dig deeper at present. I hope 
this info helps you find the root cause of the bug.

