jikechao opened a new issue, #14745:
URL: https://github.com/apache/tvm/issues/14745

   For the Keras `Reshape` operator, `target_shape=(1, -1)` is valid. However, TVM's `InferType` crashes on it and throws `Check failed: infer_idx < 0 (0 vs. 0) : One and only one dim can be inferred`.
   
   Shall we add an extra check on the dimensions of `target_shape`, or support inference for more than one dimension?
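For context, the usual NumPy-style reshape rule allows at most one `-1` in the target shape; that dimension is then computed from the total element count. A minimal sketch of that rule (the helper name and error text are illustrative, not TVM's actual `InferNewShape` implementation):

```python
import math

def infer_new_shape(old_shape, target_shape):
    """NumPy-style reshape inference: at most one -1 is allowed."""
    infer_idx = -1
    known = 1
    for i, d in enumerate(target_shape):
        if d == -1:
            if infer_idx >= 0:
                # Mirrors the "One and only one dim can be inferred" check.
                raise ValueError("One and only one dim can be inferred")
            infer_idx = i
        else:
            known *= d
    new_shape = list(target_shape)
    if infer_idx >= 0:
        new_shape[infer_idx] = math.prod(old_shape) // known
    return new_shape
```

With `old_shape=(2, 4)` and `target_shape=(1, -1)` this yields `[1, 8]`; a second `-1` (which the Keras frontend appears to introduce for the batch axis, given the failing check at index 0) trips the error above.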
   
   ### Actual behavior
   ```
       model = relay.build_module.create_executor("vm", mod, tvm.cpu(0), 
'llvm', params).evaluate()
     File 
"/workplace/software/tvm/tvm_/python/tvm/relay/backend/interpreter.py", line 
171, in evaluate
       return self._make_executor()
     File "/workplace/software/tvm/tvm_/python/tvm/relay/backend/vm.py", line 
219, in _make_executor
       self.executable = compile(self.mod, self.target)
     File "/workplace/software/tvm/tvm_/python/tvm/relay/backend/vm.py", line 
67, in compile
       compiler.lower(mod, target, target_host)
     File "/workplace/software/tvm/tvm_/python/tvm/relay/backend/vm.py", line 
126, in lower
       self._lower(mod, raw_targets)
     File 
"/workplace/software/tvm/tvm_/python/tvm/_ffi/_ctypes/packed_func.py", line 
237, in __call__
       raise get_last_ffi_error()
   tvm._ffi.base.TVMError: Traceback (most recent call last):
     15: TVMFuncCall
     14: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::vm::VMCompiler::GetFunction(std::__cxx11::basic_string<char,
 std::char_traits<char>, std::allocator<char> > const&, 
tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0> 
>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
     13: tvm::relay::vm::VMCompiler::Lower(tvm::IRModule, 
tvm::runtime::Array<tvm::Target, void> const&)
     12: tvm::relay::vm::VMCompiler::LowerImpl(tvm::IRModule)
     11: tvm::relay::vm::VMCompiler::OptimizeModuleImpl(tvm::IRModule)
     10: tvm::transform::Pass::operator()(tvm::IRModule) const
     9: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     8: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     7: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     6: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     5: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     4: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     3: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule
 (tvm::IRModule, 
tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::$_2>(tvm::relay::transform::InferType()::$_2)::{lambda(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj 
const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     2: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
     1: tvm::relay::TypeSolver::Solve()
     0: _ZN3tvm7runtime6detail
     20: TVMFuncCall
     19: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::vm::VMCompiler::GetFunction(std::__cxx11::basic_string<char,
 std::char_traits<char>, std::allocator<char> > const&, 
tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0> 
>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
     18: tvm::relay::vm::VMCompiler::Lower(tvm::IRModule, 
tvm::runtime::Array<tvm::Target, void> const&)
     17: tvm::relay::vm::VMCompiler::LowerImpl(tvm::IRModule)
     16: tvm::relay::vm::VMCompiler::OptimizeModuleImpl(tvm::IRModule)
     15: tvm::transform::Pass::operator()(tvm::IRModule) const
     14: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     13: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     12: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     11: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     10: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     9: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     8: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule
 (tvm::IRModule, 
tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::$_2>(tvm::relay::transform::InferType()::$_2)::{lambda(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj 
const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     7: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
     6: tvm::relay::TypeSolver::Solve()
     5: tvm::TypedEnvFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, 
int, tvm::Attrs const&, tvm::TypeReporter 
const&)>::operator()(tvm::runtime::Array<tvm::Type, void> const&, int, 
tvm::Attrs const&, tvm::TypeReporter const&) const
     4: _ZN3tvm7runtime13Pac
     3: tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, 
void> const&, int, tvm::Attrs const&, tvm::TypeReporter 
const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> 
const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool 
(*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, 
tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*) const
     2: tvm::relay::ReshapeRel(tvm::runtime::Array<tvm::Type, void> const&, 
int, tvm::Attrs const&, tvm::TypeReporter const&)
     1: tvm::relay::InferNewShape(tvm::runtime::Array<tvm::PrimExpr, void> 
const&, tvm::Attrs const&, bool)
     0: _ZN3tvm7runtime6detail
     File "/workplace/software/tvm/tvm_/src/relay/analysis/type_solver.cc", 
line 643
   TVMError: 
   ---------------------------------------------------------------
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   ---------------------------------------------------------------
     Check failed: (false) is false: [15:36:43] 
/workplace/software/tvm/tvm_/src/relay/op/tensor/transform.cc:651: 
   ---------------------------------------------------------------
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   ---------------------------------------------------------------
   
     Check failed: infer_idx < 0 (0 vs. 0) : One and only one dim can be 
inferred
   ```
   
   ### Expected behavior
   
   The model should compile and run (producing an output of shape `(3, 1, 8)`, matching Keras), or `InferType` should reject the shape with a clear diagnostic instead of crashing.
   
   ### Reproducible example
   ```python
   import tvm
   import tvm.relay as relay
   from tensorflow import keras
   from tensorflow.keras import layers, models
   
   input_shape = (3, 2, 4)
   x = layers.Input(shape=input_shape[1:], dtype='float32')
   
   layer = keras.layers.Reshape(target_shape=(1, -1))
   layer.set_weights(layer.get_weights())
   
   y = layer(x)
   model = models.Model(x, y)
   
   mod, params = relay.frontend.from_keras(model, {'input_1': input_shape})
   print(mod)
   
   with tvm.transform.PassContext(opt_level=3):
       model = relay.build_module.create_executor("vm", mod, tvm.cpu(0), 
'llvm', params).evaluate()  # crash
   ```
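   A possible workaround (assuming the second inferred dim comes from the batch axis the frontend prepends) is to give Keras a fully explicit `target_shape` so nothing needs to be inferred. `make_explicit` below is a hypothetical helper, not part of TVM or Keras:

```python
import math

def make_explicit(input_shape, target_shape):
    """Replace the single -1 in target_shape with its concrete value.

    input_shape excludes the batch dim, matching Keras Reshape semantics.
    """
    total = math.prod(input_shape)
    known = math.prod(d for d in target_shape if d != -1)
    return tuple(total // known if d == -1 else d for d in target_shape)
```

   For the example above, one would pass `keras.layers.Reshape(target_shape=make_explicit(input_shape[1:], (1, -1)))`, i.e. `target_shape=(1, 8)`, which avoids the ambiguity and should sidestep the crash.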
   
   Triage
   * relay
   

