mehrdadh opened a new issue #9226: URL: https://github.com/apache/tvm/issues/9226
Currently, if we define a `relay.nn.conv2d` operator with a specific size (see below) and apply [passes](https://github.com/apache/tvm/blob/d9b93d3ebdb449e31e397eb1155caac62454b0cd/tests/micro/zephyr/test_zephyr_armv7m.py#L84) to build with the SIMD schedule, it generates an error.

### Expected behavior

It should compute without error.

### Actual behavior

Error (the beginning of the traceback is truncated):

```
o<tvm::tir::IterVar>, std::allocator<std::pair<tvm::tir::IterVar const, tvm::Range> > > const&, std::unordered_map<tvm::tir::IterVar, tvm::Range, std::hash<tvm::tir::IterVar>, std::equal_to<tvm::tir::IterVar>, std::allocator<std::pair<tvm::tir::IterVar const, tvm::Range> > > const&, std::unordered_map<tvm::te::Tensor, tvm::runtime::Array<tvm::Range, void>, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::runtime::Array<tvm::Range, void> > > > const&, tvm::te::TensorIntrin const&)
E  File "/home/mhessar/mlperftiny/3rdparty/tvm/src/te/operation/tensorize.cc", line 339
E  TVMError:
E  ---------------------------------------------------------------
E  An error occurred during the execution of TVM.
E  For more information, please see: https://tvm.apache.org/docs/errors.html
E  ---------------------------------------------------------------
E    Check failed: (expr_equal(lhs, rhs)) is false: Failed to match the compute with TensorIntrin tensor_intrin's declaration
E    provided= reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[0]), source=[(int32(a[(i*2), 0])*int32(b[j, 0]))], init=[], axis=[iter_var(k, range(min=0, ext=1))], where=(bool)1, value_index=0),
E    intrin=   reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[0]), source=[(int32(a[i, 0])*int32(b[j, 0]))], init=[], axis=[iter_var(k, range(min=0, ext=1))], where=(bool)1, value_index=0)

python/tvm/_ffi/_ctypes/packed_func.py:237: TVMError
```

Note the mismatch in the log: the scheduled compute reads `a[(i*2), 0]` (a strided access, presumably introduced by `strides=(2, 2)`), while the tensor intrin was declared against the unit-stride access `a[i, 0]`, so the `expr_equal` check rejects the match.

### Script to Generate

```python
import logging

import tvm
from tvm import relay

# `model` and `_apply_desired_layout_simd` come from the test setup
# linked above (tests/micro/zephyr/test_zephyr_armv7m.py).

data = relay.var("data", relay.TensorType((1, 49, 10, 1), "int8"))
weight = relay.var("weight", relay.TensorType((10, 4, 1, 64), "int8"))
y = relay.nn.conv2d(
    data,
    weight,
    padding=(4, 1, 5, 1),
    strides=(2, 2),
    kernel_size=(10, 4),
    kernel_layout="HWIO",
    data_layout="NHWC",
    out_dtype="int32",
)
f = relay.Function([data, weight], y)
mod = tvm.IRModule.from_expr(f)
mod = relay.transform.InferType()(mod)
logging.info(mod)

relay_mod_simd = _apply_desired_layout_simd(mod)

target = tvm.target.target.micro(
    model,
    options=[
        "-keys=arm_cpu,cpu",
        "-link-params=1",
        "--executor=aot",
        "--unpacked-api=1",
        "--interface-api=c",
    ],
)

with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
    lowered_simd = relay.build(relay_mod_simd, target)
```

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
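Aside: the check that fails in `tensorize.cc` is a structural comparison between two expression trees. The toy sketch below (plain Python dataclasses, not TVM's actual APIs; the `Load` class and its fields are made up for illustration) mimics why a strided access cannot match a unit-stride intrin declaration:

```python
# Hypothetical sketch of structural expression matching, analogous to
# (but much simpler than) TVM's expr_equal check in tensorize.cc.
from dataclasses import dataclass


@dataclass(frozen=True)
class Load:
    """A buffer load with symbolic index terms kept as strings."""
    buf: str
    index: tuple


# The compute produced by the schedule indexes `a` with a strided term...
provided = Load("a", ("i*2", "0"))

# ...while the tensor intrin was declared against a unit-stride access.
intrin = Load("a", ("i", "0"))

# Structural equality fails because the index term (i*2) differs from i,
# which mirrors the "Failed to match the compute with TensorIntrin" error.
print(provided == intrin)  # False
```

If the analogy holds, resolving the report would require the tensorized region's access pattern to line up exactly with the intrin declaration, rather than carrying the stride from the conv2d schedule into the matched body.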
