Cookiee235 opened a new issue, #17207:
URL: https://github.com/apache/tvm/issues/17207

   Hi all, I found that the inference results before and after applying the
   `LiftTransformParams` pass are different.
   The bug is likely in the implementation of
   `relax.transform.LiftTransformParams()`.
   
   
   ### Expected behavior
   
   The transform `LiftTransformParams` should not change the inference results 
of the given model.
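   For reference, `Module.main` in the reproduction script below computes an elementwise `(A + 1) * (B + 1)`. A minimal NumPy sketch of the expected semantics (`reference_main` is a hypothetical helper, not part of the script):

   ```python
   import numpy as np

   # Mirrors Module.main: add a ones offset to both inputs, then multiply.
   def reference_main(A: np.ndarray, B: np.ndarray) -> np.ndarray:
       offset = np.ones_like(A)
       return (A + offset) * (B + offset)

   A = np.array([3, 0, 7], dtype="int32")
   B = np.array([5, 4, 2], dtype="int32")
   print(reference_main(A, B))  # [24  5 24]
   ```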
   
   ### Actual behavior
   
   ```
   Traceback (most recent call last):
     File "/share_container/optfuzz/res/bugs/27_wrong.py", line 42, in <module>
       np.testing.assert_allclose(before_outputs, after_outputs, rtol=1e-5, 
atol=1e-5)
     File 
"/root/.local/lib/python3.12/site-packages/numpy/testing/_private/utils.py", 
line 1684, in assert_allclose
       assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
     File "/root/miniconda3/lib/python3.12/contextlib.py", line 81, in inner
       return func(*args, **kwds)
              ^^^^^^^^^^^^^^^^^^^
     File 
"/root/.local/lib/python3.12/site-packages/numpy/testing/_private/utils.py", 
line 885, in assert_array_compare
       raise AssertionError(msg)
   AssertionError:
   Not equal to tolerance rtol=1e-05, atol=1e-05
   
   Mismatched elements: 16 / 16 (100%)
   Max absolute difference among violations: 10
   Max relative difference among violations: 0.5
    ACTUAL: array([24,  5, 16, 70, 72, 90, 42, 32, 50, 49, 24, 15, 18,  8,  4, 
40],
         dtype=int32)
    DESIRED: array([16,  4, 14, 63, 63, 80, 36, 28, 40, 42, 16, 10, 15,  7,  0, 
36],
         dtype=int32)
   
   ```
   
   ### Environment
   
   * TVM: 0.17.dev0
   
   ### Steps to reproduce
   ```
   import tvm
   from tvm import relax
   import numpy as np
   
   from tvm.script import ir as I
   from tvm.script import relax as R
   
   @I.ir_module
   class Module:
       @R.function
       def main(A: R.Tensor((16,), dtype="int32"), B: R.Tensor((16,), dtype="int32")) -> R.Tensor((16,), dtype="int32"):
           R.func_attr({"num_input": 1})
           with R.dataflow():
               offset = R.ones(R.shape([16]), dtype="int32")
               A_offset = R.add(A, offset)
               B_offset = R.add(B, offset)
               output = R.multiply(A_offset, B_offset)
               R.output(output)
           return output
   
   def compile_mod(mod, target_func, *inputs):
       # Build the module and run `target_func` on the given inputs.
       mod = relax.transform.FuseTIR()(mod)
       mod = relax.transform.LambdaLift()(mod)
       ex = relax.build(mod, target='llvm')
       vm = relax.VirtualMachine(ex, tvm.cpu())
       return vm[target_func](*inputs).numpy()
   
   
   mod = Module
   mod = tvm.relax.transform.LegalizeOps()(mod)
   
   input_0 = tvm.nd.array(np.random.randint(10, size=[16]).astype('int32'))
   input_1 = tvm.nd.array(np.random.randint(10, size=[16]).astype('int32'))
   before_outputs = compile_mod(mod, 'main', input_0, input_1)
   
   # LiftTransformParams produces wrong results for this module:
   mod = relax.transform.LiftTransformParams()(mod)
   after_outputs = compile_mod(mod, 'main', input_0, input_1)
   
   np.testing.assert_allclose(before_outputs, after_outputs, rtol=1e-5, 
atol=1e-5)
   ```
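   When triaging, it may also help to validate `before_outputs` against a plain NumPy reference first, so the mismatch can be attributed to the pass rather than to the baseline build. A sketch, assuming the same `(A + 1) * (B + 1)` semantics as `Module.main` (`numpy_ref` is a hypothetical helper, not in the script above):

   ```python
   import numpy as np

   # Hypothetical NumPy reference for Module.main's semantics.
   def numpy_ref(a, b):
       one = np.ones_like(a)
       return (a + one) * (b + one)

   a = np.random.randint(10, size=16).astype("int32")
   b = np.random.randint(10, size=16).astype("int32")
   expected = numpy_ref(a, b)

   # In the repro, one would compare the pre-transform VM output against
   # `expected` before asserting before/after equality:
   # np.testing.assert_allclose(before_outputs, expected)
   assert expected.dtype == np.int32 and expected.shape == (16,)
   ```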
   
   cc @Lunderberg @tqchen @vinx13 
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
