wrongtest-intellif opened a new pull request, #18421:
URL: https://github.com/apache/tvm/pull/18421

   A prototype change to add `ForNode::step`.
   
   It adds the minimal code needed to run the following naive test case.
   
   ```python
   import tvm
   import numpy as np
   from tvm.script import tir as T
   
   @T.prim_func
   def function(A: T.Buffer((1024,), "float32"), B: T.Buffer((1024,), "float32"), C: T.Buffer((1024,), "float32")):
       for i in range(0, 100, 3):
           C[i] = A[i] + B[i]
       
   print(function)
   lib = tvm.compile(function, target="c")
   print(lib.mod.inspect_source())
   
   lib2 = tvm.compile(function, target="llvm")
   
   a = np.random.uniform(1, 100, [1024]).astype("float32")
   b = np.random.uniform(1, 100, [1024]).astype("float32")
   c = np.zeros([1024]).astype("float32")
   lib(a, b, c)
   print(c[:])
   c[:] = 0
   lib2(a, b, c)
   print(c[:])
   ```
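   For reference, the expected effect of the step-3 loop can be checked against a plain NumPy computation. The `reference` helper below is hypothetical and independent of TVM; it only mirrors the strided write pattern of the TIR loop:
   
   ```python
   import numpy as np
   
   def reference(a, b):
       # Mirror the TIR loop: only indices 0, 3, 6, ..., 99 are written;
       # every other element of the output stays zero.
       c = np.zeros_like(a)
       c[0:100:3] = a[0:100:3] + b[0:100:3]
       return c
   
   a = np.random.uniform(1, 100, [1024]).astype("float32")
   b = np.random.uniform(1, 100, [1024]).astype("float32")
   c = reference(a, b)
   assert np.allclose(c[0:100:3], a[0:100:3] + b[0:100:3])
   assert c[1] == 0 and c[2] == 0 and c[100] == 0
   ```
   
   Comparing both compiled modules against such a reference would catch a backend that silently ignores the step.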
   
   Aspects to check for a real roadmap may include:
   1. Roundtrip support for the TIR TVMScript grammar
   2. Correctness of TIR lowering pipeline
       - For **all transformations and analysis tools**, either they are adapted to handle non-consecutive loop iteration indices, or loop canonicalization is required.
       - Ensure the original `ForNode::step` is not dropped by mutations on 
`ForNode`.
   3. Correctness of TensorIR schedule and MetaSchedule
       - Since many primitives depend on affine bindings, loop canonicalization is required.
   4. CodeGen support
       - Check that mainstream targets support the loop step.
   5. Compatibility issues
       - Argue that the change would not affect existing code, since `ForNode` is a core construct in TVM.
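   The loop canonicalization mentioned in points 2 and 3 can be sketched in plain Python (illustrative only, not this PR's implementation): a loop `for i in range(start, stop, step)` with positive `step` maps to a unit-step loop over `extent = ceil((stop - start) / step)` iterations, with the affine binding `i = start + j * step`:
   
   ```python
   def canonicalize(start, stop, step):
       # Trip count of a positive-step loop: ceil((stop - start) / step),
       # computed with integer arithmetic.
       extent = (stop - start + step - 1) // step
       # Unit-step loop variable j in [0, extent); recover i affinely.
       return [start + j * step for j in range(extent)]
   
   # The canonical unit-step loop visits exactly the same indices.
   assert canonicalize(0, 100, 3) == list(range(0, 100, 3))
   assert len(canonicalize(0, 100, 3)) == 34
   ```
   
   Primitives that assume affine bindings could then operate on the unit-step variable `j`, which is presumably why canonicalization appears as a prerequisite in the points above.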


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
