syang-ng opened a new pull request #9485:
URL: https://github.com/apache/tvm/pull/9485


   Here is a PR to fix the `divide by zero` bug reported in this [post](https://discuss.tvm.apache.org/t/divide-by-zero-error-in-tir-pass-lower-warp-memory/11433). In short, the TIR pass `lower_warp_memory` re-calculates the size of each warp-scope buffer before re-allocating it, and that calculation divides by the variable `factor`; when `factor` is zero, the pass crashes with a divide-by-zero error.
   
   
https://github.com/syang-ng/tvm/blob/main/src/tir/transforms/lower_warp_memory.cc#L222-L227
   
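   For context, here is a minimal sketch of the failing computation and the kind of guard this PR adds. The exact change is in the linked lines; apart from `factor`, which the pass itself names, the identifiers below are illustrative stand-ins for the quantities that multiply into it:
   
   ```cpp
   #include <cstdio>
   
   // Simplified model of the size re-calculation in lower_warp_memory.cc.
   // `width` and `warp_coeff` are hypothetical stand-ins; only `factor`
   // is taken from the pass itself.
   int RoundUpWarpAllocSize(int alloc_size, int width, int warp_coeff) {
     int factor = width * warp_coeff;
     // The guard, in spirit: a warp-scope buffer that is never indexed by
     // the warp thread index yields factor == 0, and the integer division
     // below would then trap at runtime with a divide-by-zero error.
     if (factor <= 0) {
       std::fprintf(stderr, "lower_warp_memory: factor must be positive\n");
       return -1;
     }
     // Round alloc_size up to the next multiple of factor.
     int warp_group = (alloc_size + factor - 1) / factor;
     return warp_group * factor;
   }
   
   int main() {
     std::printf("%d\n", RoundUpWarpAllocSize(16, 1, 2));  // fine: prints 16
     std::printf("%d\n", RoundUpWarpAllocSize(16, 1, 0));  // buggy case, now guarded
     return 0;
   }
   ```
   
   With the guard in place, the degenerate case takes the error path instead of trapping on the integer division.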
   Here is a code example that reproduces the bug:
   
   ```python
   import tvm
   from tvm import te, tir
   
   ib = tir.ir_builder.IRBuilder()
   bx = te.thread_axis("blockIdx.x")
   tx = te.thread_axis("threadIdx.x")
   
   with ib.new_scope():
       ib.scope_attr(bx, "thread_extent", 32)
       ib.scope_attr(tx, "thread_extent", 32)
       # `t` lives in warp scope but is only ever read at a constant index,
       # so the pass cannot relate it to the warp thread index.
       t = ib.allocate("float32", 16, name="t", scope="warp")
       n = ib.allocate("float32", 16, name="n", scope="local")
       n[0] = t[0]
   
   stmt = ib.get()
   f = tvm.tir.PrimFunc([], stmt)
   f = f.with_attr("from_legacy_te_schedule", True)
   m = tvm.lower(f)
   # Building for CUDA runs lower_warp_memory, which triggers the crash.
   tvm.build(m, target=tvm.target.Target("cuda"))
   ```
   
   

