cxx122 opened a new issue, #13019:
URL: https://github.com/apache/tvm/issues/13019

   When I use the `dag.print_python_code_from_state(state)` method to get
schedule code and apply it to my TE program, I get an error:
   `Check failed: (!checked_type.defined()) is false: Expected tir.IterVar, but got Array`
   I checked the generated code and found the problem: the tuple assignment does not handle the case where the op has only a single axis.
   `TENSOR_3_k = tuple(TENSOR_3.op.axis) + tuple(TENSOR_3.op.reduce_axis)`
   After this assignment, `TENSOR_3_k` is a one-element tuple, not a `tir.IterVar`, so passing it to a schedule primitive triggers the check above.
   Adding a trailing comma unpacks the single element and solves the problem:
   `TENSOR_3_k, = tuple(TENSOR_3.op.axis) + tuple(TENSOR_3.op.reduce_axis)`
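   The difference the trailing comma makes can be seen with plain Python tuples (a minimal sketch, no TVM needed; the names here are illustrative):

   ```python
   # Stand-in for the single IterVar returned by
   # tuple(op.axis) + tuple(op.reduce_axis) when there is only one axis.
   axes = ("k",)

   # Without a trailing comma, the whole tuple is assigned ...
   k_wrong = axes
   print(type(k_wrong))   # <class 'tuple'>

   # ... while the trailing comma unpacks the single element.
   k_right, = axes
   print(type(k_right))   # <class 'str'>
   ```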
   ### Environment
   
   Operating System: Ubuntu 20.04, TVM version: tag v0.10.dev [0e23122]
   
   ### Steps to reproduce
   
   ```python
   import tvm
   from tvm import te

   TENSOR_0 = te.placeholder([50], dtype="int64", name="TENSOR_0")
   TENSOR_1 = te.placeholder([48, 48], dtype="uint64", name="TENSOR_1")
   TENSOR_2 = te.compute([46, 46, 46], lambda k, i, m: te.floor(TENSOR_0[m] + TENSOR_1[k, i]), name="TENSOR_2")
   TENSOR_3 = te.compute([44], lambda k: TENSOR_2[k, k, k], name="TENSOR_3")
   s = te.create_schedule([TENSOR_3.op, TENSOR_2.op])
   tensor_list = [TENSOR_0, TENSOR_1, TENSOR_2, TENSOR_3]


   TENSOR_2_k, TENSOR_2_i, TENSOR_2_m = tuple(TENSOR_2.op.axis) + tuple(TENSOR_2.op.reduce_axis)
   # Generated without the trailing comma, so TENSOR_3_k is a 1-tuple, not a tir.IterVar
   TENSOR_3_k = tuple(TENSOR_3.op.axis) + tuple(TENSOR_3.op.reduce_axis)
   s[TENSOR_2].compute_inline()
   print(type(TENSOR_3_k))
   s[TENSOR_3].parallel(TENSOR_3_k)  # Check failed: Expected tir.IterVar, but got Array

   tvm.lower(s, tensor_list, simple_mode=True)
   ```

