YuboZhaoo opened a new issue, #16354:
URL: https://github.com/apache/tvm/issues/16354

   ### Expected behavior
   
   The following conv2d_transpose program with a non-default kernel_layout should compile and run successfully:
   ```
   x0 = relay.var('x0', shape=(6, 3, 1, 1))
   x1 = relay.var('x1', shape=(1, 3, 1, 1))
   output = relay.nn.conv2d_transpose(x0, x1, kernel_layout="OIWH")
   ```
   
   
   If all elements of x0 and x1 are 1, the output should be as follows:
   ```
   [[[[3.]]]  [[[3.]]]  [[[3.]]]  [[[3.]]]  [[[3.]]]  [[[3.]]]]
   ```
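   For reference, with a 1x1 kernel and unit strides, conv2d_transpose reduces to a per-pixel contraction over input channels, so each of the six outputs is a sum of three ones. A minimal NumPy sketch of this reference computation (assuming NCHW data and a kernel with O=1, I=3, as in the snippet above):
   ```
   import numpy as np

   # With a 1x1 kernel, the transposed convolution is just a channel
   # contraction: out[n, o] = sum_i data[n, i] * weight[o, i].
   data = np.ones((6, 3, 1, 1), dtype="float32")    # NCHW: batch 6, 3 channels
   weight = np.ones((1, 3, 1, 1), dtype="float32")  # O=1, I=3, W=1, H=1

   out = np.einsum("nihw,oihw->nohw", data, weight)
   print(out.shape)   # (6, 1, 1, 1)
   print(out.ravel()) # six values of 3.0
   ```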
   
   ### Actual behavior
   A check fails at runtime:
   ```
   tvm.error.InternalError: Traceback (most recent call last):
     2: tvm::runtime::GraphExecutor::Run()
     1: std::_Function_handler<void (), tvm::runtime::GraphExecutor::CreateTVMOp(tvm::runtime::TVMOpParam const&, std::vector<DLTensor*, std::allocator<DLTensor*> > const&)::{lambda()#3}>::_M_invoke(std::_Any_data const&)
     0: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::WrapPackedFunc(int (*)(TVMValue*, int*, int, TVMValue*, int*, void*), tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     File "/home/yuboz/work/Programming/Python/tvm14/tvm/src/runtime/library_module.cc", line 76
   InternalError: Check failed: ret == 0 (-1 vs. 0) : Assert fail: T.Cast("int32", tvmgen_default_fused_nn_conv2d_transpose_output_unpack_shape[1]) == 3, Argument tvmgen_default_fused_nn_conv2d_transpose.output_unpack.shape[1] has an unsatisfied constraint: 3 == T.Cast("int32", tvmgen_default_fused_nn_conv2d_transpose_output_unpack_shape[1])
   ```
   
   ### Environment
   
   - Ubuntu 22.04
   - TVM 0.15.0
   
   ### Steps to reproduce
   This problem occurs with every non-default kernel_layout, and only when opt_level is set to 0, which is puzzling.
   ```
   import tvm
   from tvm import relay, transform, IRModule
   from tvm.contrib import graph_executor
   import numpy as np
   
   x0 = relay.var('x0', shape=(6, 3, 1, 1))
   x1 = relay.var('x1', shape=(1, 3, 1, 1))
   output = relay.nn.conv2d_transpose(x0, x1, kernel_layout="OIWH")
   mod = IRModule.from_expr(output)
   print(mod)
   with transform.PassContext(opt_level=0):
       lib = relay.build(mod, target='llvm')
   
   dev = tvm.cpu(0)
   m = graph_executor.GraphModule(lib["default"](dev))
   m.set_input("x0", np.ones((6, 3, 1, 1)))
   m.set_input("x1", np.ones((1, 3, 1, 1)))
   m.run()
   tvm_output = m.get_output(0)
   print(tvm_output)
   ```
   

