Why am I getting this issue?
Don't we have sm_86 schedules available when we install TVM from source?


```
tvmgen_default_fused_nn_conv2d_expand_dims_add_71
Cannot find tuned schedules for target=cuda -keys=cuda,gpu -arch=sm_86 -max_num_threads=1024 -thread_warp_size=32, workload_key=["20675e5640e1ea8eef79fda7ff31be4c", [2, 128, 52, 52], [256, 128, 3, 3], [256, 1, 1], [2, 256, 52, 52]]. A fallback TOPI schedule is used, which may bring great performance regression or even compilation failure. Compute DAG info:
FunctionVar_1_0 = PLACEHOLDER [2, 128, 52, 52]
pad_temp(i0, i1, i2, i3) = tir.if_then_else(((((i2 >= 1) && (i2 < 53)) && (i3 >= 1)) && (i3 < 53)), FunctionVar_1_0[i0, i1, (i2 - 1), (i3 - 1)], 0f)
FunctionVar_1_1 = PLACEHOLDER [256, 128, 3, 3]
conv2d_nchw(nn, ff, yy, xx) += (pad_temp[nn, rc, (yy + ry), (xx + rx)]*FunctionVar_1_1[ff, rc, ry, rx])
FunctionVar_1_2 = PLACEHOLDER [256, 1, 1]
T_expand_dims(ax0, ax1, ax2, ax3) = FunctionVar_1_2[ax1, ax2, ax3]
T_add(ax0, ax1, ax2, ax3) = (conv2d_nchw[ax0, ax1, ax2, ax3] + T_expand_dims[0, ax1, 0, 0])
```





---
[Visit Topic](https://discuss.tvm.apache.org/t/cannot-find-tuned-schedules-for-target-cuda-keys-cuda-gpu-arch-sm-86-max-num-threads-1024-thread-warp-size-32/17701/1) to respond.
