ConvolutedDog commented on issue #18481:
URL: https://github.com/apache/tvm/issues/18481#issuecomment-3569641411

   It appears that `TOTAL_TRIALS` was set too low, so some kernels were never 
tuned at all.
   
   The ResNet18 model is decomposed into 20 distinct tuning tasks (one per 
operator, e.g., conv/relu). The `MetaScheduleTuneIRMod` pass accepts a 
`max_trials_per_task` parameter to limit the trials for each task.
   
https://github.com/apache/tvm/blob/91c1921210adb5a911ee133ca35b46cdea472843/src/relax/transform/meta_schedule.cc#L151-L165
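   The fallback can be sketched in plain Python (this is an illustration of the 
logic in `meta_schedule.cc`, not a TVM API; the function name is made up):

   ```python
   # Hedged sketch: mirrors the fallback in MetaScheduleTuneIRMod, where an
   # unset max_trials_per_task falls back to the global trial budget, so no
   # real per-task cap is applied.
   def resolve_trial_budget(max_trials_global, max_trials_per_task=None):
       """Return the effective per-task trial cap."""
       if max_trials_per_task is None:
           # Default: every task may draw from the single global budget.
           max_trials_per_task = max_trials_global
       return max_trials_per_task
   ```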
   
   However, this parameter is not exposed to users in the `e2e_opt_model.py` 
script or the `static_shape_tuning_pipeline` function.
   
https://github.com/apache/tvm/blob/91c1921210adb5a911ee133ca35b46cdea472843/docs/how_to/tutorials/e2e_opt_model.py#L104
   
https://github.com/apache/tvm/blob/91c1921210adb5a911ee133ca35b46cdea472843/python/tvm/relax/pipeline.py#L109-L114
   
   Consequently, the pass defaults to using `TOTAL_TRIALS` for 
`max_trials_per_task`, which can result in some tasks receiving no tuning at 
all and lacking crucial optimizations like thread binding.
   
https://github.com/apache/tvm/blob/91c1921210adb5a911ee133ca35b46cdea472843/src/relax/transform/meta_schedule.cc#L157
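   To see why a small global budget can starve tasks, consider a simplified 
scheduler that hands out trials in fixed-size batches (TVM's real task 
scheduler is more sophisticated, but the failure mode is the same; the batch 
size of 64 matches the usual `num_trials_per_iter` default):

   ```python
   # Hedged illustration (not TVM code): round-robin trial allocation from a
   # single global budget, in batches, until the budget is exhausted.
   def allocate_trials(num_tasks, total_trials, batch=64):
       trials = [0] * num_tasks
       remaining = total_trials
       task = 0
       while remaining > 0:
           grant = min(batch, remaining)  # last grant may be a partial batch
           trials[task] += grant
           remaining -= grant
           task = (task + 1) % num_tasks
       return trials

   # With TOTAL_TRIALS=80 and 20 ResNet18 tasks, only the first two tasks
   # receive any trials; the remaining 18 are never tuned.
   alloc = allocate_trials(num_tasks=20, total_trials=80)
   ```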
   
   I previously proposed adding a parameter to `e2e_opt_model.py` to control 
the maximum number of trials per task; see this PR: 
https://github.com/apache/tvm/pull/18159


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to