FrozenGene commented on pull request #7573:
URL: https://github.com/apache/tvm/pull/7573#issuecomment-792666127


   > > @zhuochenKIDD Do you have a guess as to why this is faster than the 
for-loop launch approach?
   > 
   > @tkonolige Not sure why it's faster; it's based on testing and depends on 
the workload.
   > I guess that my model, which has many tiny kernels, is more kernel-launch 
bound, and cuda-graph might reduce the kernel-launch overhead. I will do more 
profiling & analysis. Do you have any suggestions?
   
   I think the answer is here: 
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cuda-graphs
   
   > This allows a graph to be defined once and then launched repeatedly. 
Separating out the definition of a graph from its execution enables a number of 
optimizations: first, CPU launch costs are reduced compared to streams, because 
much of the setup is done in advance; second, presenting the whole workflow to 
CUDA enables optimizations which might not be possible with the piecewise work 
submission mechanism of streams.
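To illustrate the mechanism the docs describe, here is a minimal sketch of the CUDA Graphs stream-capture flow (using the CUDA 11-era `cudaGraphInstantiate` signature; the kernel, sizes, and launch counts are illustrative, not taken from this PR): a sequence of tiny kernel launches is captured once, instantiated once, and then replayed with a single `cudaGraphLaunch` per iteration instead of one launch call per kernel.

```cuda
#include <cuda_runtime.h>

__global__ void tinyKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int N = 1024, NUM_KERNELS = 100;
    float* d_data;
    cudaMalloc(&d_data, N * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Capture the whole sequence of tiny kernel launches into a graph.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    for (int k = 0; k < NUM_KERNELS; ++k) {
        tinyKernel<<<(N + 255) / 256, 256, 0, stream>>>(d_data, N);
    }
    cudaStreamEndCapture(stream, &graph);

    // Instantiate once: much of the per-launch setup is paid here,
    // in advance, rather than on every launch.
    cudaGraphExec_t graphExec;
    cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);

    // Replay: one CPU-side call submits all NUM_KERNELS kernels,
    // instead of NUM_KERNELS separate launch calls per iteration.
    for (int iter = 0; iter < 10; ++iter) {
        cudaGraphLaunch(graphExec, stream);
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d_data);
    return 0;
}
```

For a workload dominated by many tiny kernels, the per-iteration CPU cost drops from NUM_KERNELS launch calls to one, which matches the "CPU launch costs are reduced compared to streams" point in the quoted docs.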


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

