roastduck opened a new issue #5130: Wrong boundary check for GPU block / thread indices
URL: https://github.com/apache/incubator-tvm/issues/5130

The boundary check for GPU block / thread indices is wrong when the indices are used inside a loop range (loop condition). Code is better than words.

Algorithm:

```python
def f(i, j):
    p_begin, p_end = row_ptr[i], row_ptr[i + 1]
    p_axis = tvm.te.reduce_axis((0, p_end - p_begin), name='p_axis')
    p = p_axis + p_begin  # To avoid #4929
    pp_begin, pp_end = tile_ptr[p], tile_ptr[p + 1]
    pp_axis = tvm.te.reduce_axis((0, pp_end - pp_begin), name='pp_axis')
    pp_idx = pp_axis + pp_begin  # To avoid #4929
    x = A_val[pp_idx]
    y = B[A_idx[pp_idx], j]
    return tvm.te.sum(x * y, axis=(p_axis, pp_axis))

return tvm.te.compute((m, n), f, name='C')
```

*PS. (not related to this issue) Issue #4929 should be explicitly avoided here.*

Schedule:

```python
s = tvm.te.create_schedule(C.op)
axis = s[C].fuse(*C.op.axis)
blk, th = s[C].split(axis, factor=256)
s[C].bind(blk, tvm.te.thread_axis('blockIdx.x'))
s[C].bind(th, tvm.te.thread_axis('threadIdx.x'))
```

The generated CUDA code looks like this:

```cuda
extern "C" __global__ void csrmm_kernel0(void** __restrict__ C,
                                         void** __restrict__ row_ptr,
                                         void** __restrict__ tile_ptr,
                                         void** __restrict__ A_val,
                                         void** __restrict__ B,
                                         void** __restrict__ A_idx) {
  if (((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)) < 21954) {
    if (((((int)blockIdx.x) * 256) + ((int)threadIdx.x)) < 702528) {
      ((float*)C)[(((((int)blockIdx.x) * 256) + ((int)threadIdx.x)))] = 0.000000e+00f;
    }
  }
  for (int p_axis = 0;
       p_axis < (((int*)row_ptr)[((((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)) + 1))] -
                 ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]);
       ++p_axis) {
    for (int pp_axis = 0;
         pp_axis < (((int*)tile_ptr)[(((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]) + 1))] -
                    ((int*)tile_ptr)[((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]))]);
         ++pp_axis) {
      if (((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)) < 21954) {
        if (((((int)blockIdx.x) * 256) + ((int)threadIdx.x)) < 702528) {
          if (p_axis < (((int*)row_ptr)[((((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)) + 1))] -
                        ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))])) {
            if (pp_axis < (((int*)tile_ptr)[(((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]) + 1))] -
                           ((int*)tile_ptr)[((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]))])) {
              ((float*)C)[(((((int)blockIdx.x) * 256) + ((int)threadIdx.x)))] =
                  (((float*)C)[(((((int)blockIdx.x) * 256) + ((int)threadIdx.x)))] +
                   (((float*)A_val)[((pp_axis + ((int*)tile_ptr)[((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]))]))] *
                    ((float*)B)[(((((int*)A_idx)[((pp_axis + ((int*)tile_ptr)[((p_axis + ((int*)row_ptr)[(((((int)blockIdx.x) * 8) + (((int)threadIdx.x) >> 5)))]))]))] * 32) + (((int)threadIdx.x) & 31)))]));
            }
          }
        }
      }
    }
  }
}
```

Please look at the `< 21954` and `< 702528` checks **inside** the loops. I think they should be **outside** the loops, because the loop conditions themselves use these thread indices as array indices. This problem leads to an illegal memory access in my program.

Tested with the latest version (fdc8b0dd1763aece4ce457a7baf522c2989ac6c4).
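To make the control-flow problem concrete, here is a plain-Python model of the guard placement (this is not TVM code; `row_ptr`, `tid`, and the sizes are made up for illustration). The loop bound reads `row_ptr[tid + 1]` before any check on `tid`, so an out-of-range thread still indexes `row_ptr` — the Python analogue of the illegal memory access on the GPU:

```python
def kernel_as_generated(row_ptr, acc, tid, num_rows):
    # BUG model: the loop bound reads row_ptr[tid + 1] *before* any
    # check that tid < num_rows, mirroring the generated CUDA above.
    for p in range(row_ptr[tid + 1] - row_ptr[tid]):  # unguarded read
        if tid < num_rows:  # guard is inside the loop: too late
            acc[tid] += 1

def kernel_expected(row_ptr, acc, tid, num_rows):
    # Hoisting the guard outside the loop protects the loop-bound reads.
    if tid < num_rows:
        for p in range(row_ptr[tid + 1] - row_ptr[tid]):
            acc[tid] += 1

row_ptr = [0, 2, 5]  # CSR-style pointer array for 2 rows
acc = [0, 0]
num_rows = 2

kernel_expected(row_ptr, acc, 0, num_rows)
assert acc == [2, 0]

# An out-of-range "thread" (tid == 2) raises IndexError in the buggy
# version, because row_ptr[3] does not exist:
try:
    kernel_as_generated(row_ptr, acc, 2, num_rows)
    raised = False
except IndexError:
    raised = True
assert raised
```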
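For reference, the constants in the kernel are consistent with a matrix of 21954 rows and 32 columns (these shapes are inferred from the bounds, not stated explicitly above): fusing the two axes and splitting by 256 gives a flat index `blk * 256 + th`, whose row part is `blk * 8 + (th >> 5)` because 256 / 32 = 8, and whose column part is `th & 31`. A quick sanity check of that arithmetic:

```python
# Sanity-check of the index arithmetic in the generated kernel,
# assuming m = 21954 rows and n = 32 columns (inferred shapes).
m, n = 21954, 32
factor = 256

# Total number of C elements matches the flat-index bound in the kernel:
assert m * n == 702528

# For every (blk, th), the fused flat index blk*256 + th splits into
# row index blk*8 + (th >> 5) and column index th & 31:
for blk in range(100):
    for th in range(factor):
        flat = blk * factor + th
        assert flat // n == blk * (factor // n) + (th >> 5)
        assert flat % n == th & 31
```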
