junrushao commented on code in PR #13421:
URL: https://github.com/apache/tvm/pull/13421#discussion_r1025696792


##########
src/meta_schedule/schedule/cuda/thread_bind.cc:
##########
@@ -55,14 +55,14 @@ std::function<ExprRV(int64_t)> MakeFactorSampler(Schedule sch, Array<Integer> th
 
 Array<LoopRV> BindSpatialLoop(Schedule sch, LoopRV loop, int64_t max_threadblocks,
                               int64_t max_threads_per_block,
-                              std::function<ExprRV(int64_t)> get_factor) {
+                              std::function<ExprRV(int64_t)> get_factor, bool allow_reorder) {
   int64_t extent = -1;
   if (const int64_t* e = as_const_int(sch->Get(loop)->extent)) {
     extent = *e;
   } else {
     extent = std::numeric_limits<int64_t>::max();
   }
-  if (extent <= max_threadblocks * max_threads_per_block) {
+  if (extent <= max_threadblocks * max_threads_per_block || !allow_reorder) {
     if (!get_factor) {
       get_factor = MakeFactorSampler(sch, {32, 64, 128, 256, 512, 1024});
     }

Review Comment:
   Let's instead do this:
   
   ```c++
   if (extent <= xxx) {
   } else if (allow_reorder) {
     split by [None, max_threadblocks, max_threads_per_block];
   } else {
     split by [max_threadblocks, max_threads_per_block, None];
   }
   ```
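
   To make the intent of the suggestion concrete: the position of the `None` (inferred) factor decides whether the leftover loop ends up outermost (so the block/thread loops sit innermost, as when reordering is allowed) or innermost (so the block/thread loops keep their outer position when reordering is not allowed). A minimal self-contained sketch of that factor inference, using a hypothetical `InferSplit` helper that mirrors (but is not) TVM's `sch->Split` inference, with `-1` standing in for `None`:

   ```cpp
   #include <cassert>
   #include <cstdint>
   #include <iostream>
   #include <vector>

   // Hypothetical helper mirroring split-factor inference: exactly one factor
   // may be -1 ("None") and is derived from the loop extent by ceiling division.
   std::vector<int64_t> InferSplit(int64_t extent, std::vector<int64_t> factors) {
     int64_t known = 1;
     int none_idx = -1;
     for (size_t i = 0; i < factors.size(); ++i) {
       if (factors[i] == -1) {
         none_idx = static_cast<int>(i);
       } else {
         known *= factors[i];
       }
     }
     if (none_idx >= 0) {
       factors[none_idx] = (extent + known - 1) / known;  // ceil(extent / known)
     }
     return factors;
   }

   int main() {
     // Example extent larger than max_threadblocks * max_threads_per_block.
     int64_t extent = int64_t(1) << 20;
     int64_t max_threadblocks = 256, max_threads_per_block = 1024;

     // allow_reorder: leftover loop outermost, block/thread loops innermost.
     auto reordered = InferSplit(extent, {-1, max_threadblocks, max_threads_per_block});
     // !allow_reorder: block/thread loops stay outermost, leftover innermost.
     auto in_order = InferSplit(extent, {max_threadblocks, max_threads_per_block, -1});

     std::cout << reordered[0] << " " << in_order[2] << "\n";  // prints "4 4"
   }
   ```

   Either way the inferred extent is the same; only its position (and hence which loops get bound to `blockIdx.x` / `threadIdx.x`) changes.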



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
