ANSHUMAN87 opened a new pull request #7158:
URL: https://github.com/apache/tvm/pull/7158
When the tensor size is known at compile time, we do not need an over-sized thread extent: the extent can be bound to the exact element count, which also eliminates the `tir.likely` guard conditions. The output below shows the impact of the change, using the Transpose op for illustration.
Before:
```
producer_realize T_transpose([0, 3], [0, 4]) {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 1
  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 1024
  if (tir.likely((floordiv(threadIdx.x, 4) < 3))) {
    if (tir.likely((threadIdx.x < 12))) {
      T_transpose[floordiv(threadIdx.x, 4), floormod(threadIdx.x, 4)] = placeholder[floormod(threadIdx.x, 4), floordiv(threadIdx.x, 4)]
    }
  }
}
```
After:
```
producer_realize T_transpose([0, 3], [0, 4]) {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 1
  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 12
  T_transpose[floordiv(threadIdx.x, 4), floormod(threadIdx.x, 4)] = placeholder[floormod(threadIdx.x, 4), floordiv(threadIdx.x, 4)]
}
```
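Conceptually, the change replaces an over-approximated thread extent plus guard conditions with the exact extent. A minimal plain-Python sketch (not TVM code; function names and the 1024 default are illustrative assumptions) of why the two lowerings compute the same transpose:

```python
def transpose_before(placeholder, rows=3, cols=4):
    # Over-approximated thread extent (1024) requires guard conditions
    # so that out-of-range "threads" do nothing.
    out = [[None] * cols for _ in range(rows)]
    for tid in range(1024):            # thread_extent = 1024
        if tid // cols < rows:         # guard 1 (tir.likely)
            if tid < rows * cols:      # guard 2 (tir.likely)
                out[tid // cols][tid % cols] = placeholder[tid % cols][tid // cols]
    return out

def transpose_after(placeholder, rows=3, cols=4):
    # Exact thread extent (rows * cols = 12): every "thread" is in
    # range, so the guards are unnecessary.
    out = [[None] * cols for _ in range(rows)]
    for tid in range(rows * cols):     # thread_extent = 12
        out[tid // cols][tid % cols] = placeholder[tid % cols][tid // cols]
    return out
```

Both variants write the same 3x4 result from a 4x3 input; the second simply launches only as many threads as there are output elements.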
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]