Cookiee235 opened a new issue, #17211:
URL: https://github.com/apache/tvm/issues/17211

   
   Hi all, I set `check_well_formed=True` in the Relax IR construction below, and `mod.show()` prints the IR successfully, so the Relax IR appears to pass the well-formedness check. However, compilation crashes when executing `ex = relax.build(mod, target='llvm')`. The crash message is:
   "**Argument 0 type mismatch: expected R.Tensor((16,), dtype="float32"), given R.Tuple(R.Tensor((16,), dtype="float32"))**"
   
   Based on my analysis, if I replace `gv1 = R.call_tir(cls.relu, (x), out_sinfo=R.Tensor((1, 512, 64, 64)))` (marked `# crash` in the script below) with either `gv1 = R.nn.relu(x)` or `gv1 = R.call_tir(cls.relu, (x,), out_sinfo=R.Tensor((1, 512, 64, 64), dtype="float32"))` (i.e., passing the argument as a one-element tuple and spelling out the dtype), the script runs fine.
   Given that the Relax IR constructor can infer the full struct info for `gv1 = R.nn.relu(x)` from its context, why doesn't it complete the missing information for the original `call_tir` binding of `gv1`?
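   For context on the syntax difference between the crashing and working calls, here is a minimal plain-Python sketch (no TVM involved); whether this alone explains the mismatch is for the maintainers to confirm:
   ```
   # Plain-Python illustration: `(x)` is just `x` with redundant parentheses,
   # while `(x,)` is a one-element tuple; the trailing comma is what makes the
   # argument list a tuple in the working call_tir variant.
   x = [1.0, 2.0, 3.0]
   print(type((x)))   # <class 'list'>  -- parentheses alone do not create a tuple
   print(type((x,)))  # <class 'tuple'>
   ```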
   
   Taking a step back: if the Relax IR constructor cannot complete the missing information and `check_well_formed=True` is set during IR construction, an exception should be raised early, at `mod = Module`, rather than inside `relax.build()`. Failing early would make the code more robust.
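   The traceback below shows the failure comes from the `CallTIRRewrite` pass inside the default build pipeline, so one possible way to fail early (a hedged sketch, assuming `relax.transform.CallTIRRewrite` is exposed as in recent TVM) is to run that pass right after construction:
   ```
   # Hedged sketch: run the failing pass directly, right after construction,
   # instead of waiting for relax.build() to reach it. Assumes
   # relax.transform.CallTIRRewrite is available, as in recent TVM releases.
   from tvm import relax

   mod = Module
   mod = relax.transform.CallTIRRewrite()(mod)  # raises the same type-mismatch error here
   ```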
   
   BTW, I would prefer that the IR constructor fill in missing information, or correct inconsistent constraints, based on the IR's context.
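   For what it's worth, the information that is (or is not) recorded for `gv1` can be inspected right after construction. This is a hedged sketch that assumes the usual Relax Python accessors (`body.blocks`, `bindings`, `var.struct_info`) on the parsed function:
   ```
   # Hedged sketch: inspect the struct info the parser recorded for gv1.
   # Assumes the usual Relax accessors on the parsed "main" function.
   main = Module["main"]
   binding = main.body.blocks[0].bindings[0]  # the gv1 = R.call_tir(...) binding
   print(binding.var.struct_info)             # out_sinfo as recorded, without an explicit dtype
   ```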
   
   
   ### Actual behavior
   ```
   Traceback (most recent call last):
     File "demo_simple.py", line 26, in <module>
       ex = relax.build(mod, target='llvm')  # crash here!
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     File "/software/tvm/python/tvm/relax/vm_build.py", line 335, in build
       mod = pipeline(mod)
             ^^^^^^^^^^^^^
     File "/software/tvm/python/tvm/ir/transform.py", line 238, in __call__
       return _ffi_transform_api.RunPass(self, mod)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     File "/software/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
       raise_last_ffi_error()
     File "/software/tvm/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
       raise py_err
     File "/software/tvm/python/tvm/relax/pipeline.py", line 101, in _pipeline
       mod = seq(mod)
             ^^^^^^^^
     File "/software/tvm/python/tvm/ir/transform.py", line 238, in __call__
       return _ffi_transform_api.RunPass(self, mod)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     File "/software/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
       raise_last_ffi_error()
     File "/software/tvm/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
       raise py_err
   tvm._ffi.base.TVMError: Traceback (most recent call last):
     38: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule
 (tvm::transform::Pass, 
tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass,
 tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     37: tvm::transform::Pass::operator()(tvm::IRModule) const
     36: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     35: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     34: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     33: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     32: _ZN3tvm7runtime13PackedFuncObj9ExtractorINS0_1
     31: tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, 
tvm::transform::PassContext)>::AssignTypedLambda<tvm::relax::transform::CallTIRRewrite()::{lambda(tvm::IRModule,
 
tvm::transform::PassContext)#1}>(tvm::relax::transform::CallTIRRewrite()::{lambda(tvm::IRModule,
 tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*) const
     30: tvm::relax::CallTIRMutator::Run()
     29: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
     28: tvm::relax::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr 
const&)>::VisitExpr(tvm::RelayExpr const&)
     27: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7runtime9ObjectRef
     26: tvm::relax::ExprMutator::VisitExpr_(tvm::relax::FunctionNode const*)
     25: tvm::relax::ExprMutator::VisitWithNewScope(tvm::RelayExpr const&, 
tvm::runtime::Optional<tvm::runtime::Array<tvm::relax::Var, void> >)
     24: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
     23: tvm::relax::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr 
const&)>::VisitExpr(tvm::RelayExpr const&)
     22: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7runtime9ObjectRef
     21: tvm::relax::ExprMutator::VisitExpr_(tvm::relax::SeqExprNode const*)
     20: tvm::relax::ExprMutator::VisitBindingBlock(tvm::relax::BindingBlock 
const&)
     19: 
tvm::relax::ExprMutator::VisitBindingBlock_(tvm::relax::BindingBlockNode const*)
     18: tvm::relax::ExprMutator::VisitBinding(tvm::relax::Binding const&)
     17: tvm::relax::ExprMutator::VisitBinding_(tvm::relax::VarBindingNode 
const*)
     16: _ZZN3tvm5relax11ExprMutator22InitVisitBindingVTabl
     15: tvm::relax::ExprMutator::VisitBinding_(tvm::relax::VarBindingNode 
const*, tvm::relax::CallNode const*)
     14: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
     13: tvm::relax::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr 
const&)>::VisitExpr(tvm::RelayExpr const&)
     12: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7runtime9ObjectRef
     11: tvm::relax::CallTIRMutator::VisitExpr_(tvm::relax::CallNode const*)
     10: tvm::relax::BlockBuilderImpl::Emit(tvm::RelayExpr, 
tvm::runtime::String)
     9: tvm::relax::BlockBuilderImpl::Emit(tvm::RelayExpr, bool, 
tvm::runtime::String)
     8: tvm::relax::Normalizer::Normalize(tvm::RelayExpr const&)
     7: tvm::relax::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr 
const&)>::VisitExpr(tvm::RelayExpr const&)
     6: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7runtime9ObjectRef
     5: non-virtual thunk to 
tvm::relax::Normalizer::VisitExpr_(tvm::relax::CallNode const*)
     4: tvm::relax::Normalizer::VisitExpr_(tvm::relax::CallNode const*)
     3: tvm::relax::Normalizer::InferStructInfo(tvm::relax::Call const&)
     2: tvm::relax::DeriveCallRetStructInfo(tvm::relax::FuncStructInfo const&, 
tvm::relax::Call const&, tvm::relax::BlockBuilder const&, tvm::arith::Analyzer*)
     1: tvm::relax::CallRetStructInfoDeriver::Derive(tvm::relax::FuncStructInfo 
const&, tvm::relax::Call const&, tvm::relax::BlockBuilder const&)
     0: tvm::relax::BlockBuilderImpl::ReportFatal(tvm::Diagnostic const&)
     File "/software/tvm/src/relax/ir/block_builder.cc", line 159
    TVMError: Argument 0 type mismatch: expected R.Tensor((16,), dtype="float32"), given R.Tuple(R.Tensor((16,), dtype="float32"))
   ```
   
   ### Environment
   * TVM: 0.17.dev0
   
   ### Steps to reproduce
   ```
    import tvm
    from tvm import relax
    from tvm.script import ir as I
    from tvm.script import tir as T
    from tvm.script import relax as R


    @I.ir_module(check_well_formed=True)
    class Module:
        @T.prim_func(private=True)
        # def relu(A: T.Buffer((T.int64(1), T.int64(512), T.int64(64), T.int64(64)), "float32"), B: T.Buffer((T.int64(1), T.int64(512), T.int64(64), T.int64(64)), "float32")):
        def relu(A: T.Buffer((T.int64(1), T.int64(512), T.int64(64), T.int64(64)), "float32"), B: T.Buffer((T.int64(1), T.int64(512), T.int64(64), T.int64(64)))):
            T.func_attr({"op_pattern": 0})
            # with T.block("root"):
            for i0, i1, i2, i3 in T.grid(T.int64(1), T.int64(512), T.int64(64), T.int64(64)):
                with T.block("relu"):
                    v_i0, v_i1, v_i2, v_i3 = T.axis.remap("SSSS", [i0, i1, i2, i3])
                    T.reads(A[v_i0, v_i1, v_i2, v_i3])
                    T.writes(B[v_i0, v_i1, v_i2, v_i3])
                    B[v_i0, v_i1, v_i2, v_i3] = T.max(A[v_i0, v_i1, v_i2, v_i3], T.float32(0))

        @R.function
        def main(x: R.Tensor((1, 512, 64, 64), dtype="float32")) -> R.Tensor((1, 512, 64, 64), dtype="float32"):
            cls = Module
            with R.dataflow():
                gv1 = R.call_tir(cls.relu, (x), out_sinfo=R.Tensor((1, 512, 64, 64)))  # crash
                # gv1 = R.nn.relu(x)  # run well
                # gv1 = R.call_tir(cls.relu, (x,), out_sinfo=R.Tensor((1, 512, 64, 64), dtype="float32"))  # run well
                R.output(gv1)
            return gv1

    mod = Module
    mod.show()

    mod = relax.transform.FuseTIR()(mod)
    mod = relax.transform.LambdaLift()(mod)
    ex = relax.build(mod, target='llvm')
   ```
   
   cc @Lunderberg @tqchen 

