wrongtest edited a comment on pull request #6062:
URL: https://github.com/apache/incubator-tvm/pull/6062#issuecomment-659137025


   > Thanks for the PR! I have two questions other than the comments:
   > 
   > 1. How to trigger this change (i.e., `to_batch=false`) for an end user? It 
seems to me that you can configure it only by manually modifying the 
build_module or VM compiler and rebuilding TVM.
   > 2. IIUC, the reason that `ParallelDenseFlatCombiner` is derived from `ParallelOpCombiner` instead of `ParallelOpBatchCombiner` is that it requires special processing in almost every function, so there seems to be no benefit to deriving from `ParallelOpBatchCombiner`. Now the class hierarchy becomes:
   >    
   >    * `ParallelDenseBatchCombiner` <- `ParallelOpBatchCombiner` <- `ParallelOpCombiner`
   >    * `ParallelDenseFlatCombiner` <---------------------------------------|
   >    
   >    Since I didn't find any other classes derived from `ParallelOpBatchCombiner`, should we simplify the `ParallelOpBatchCombiner` class if we cannot make both `ParallelDense*Combiner` classes derive from it?
   
   Thanks for your comments!
   - In our practice we just manually call the Python API `mod = relay.transform.CombineParallelDense(3, False)(mod)`.
   Because this pass changes tensor shapes, we currently have to call it manually (or via `relay.optimize(mod)` for the default optimization pipeline) before any auto-tuning step, so that tuning sees the combined kernel shapes before the build.
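
   For intuition, here is a minimal NumPy sketch (not TVM code) of what the non-batch (`to_batch=False`) combine does to parallel dense branches: weights of branches that share an input are concatenated along the output axis into one dense op, whose result is split back per branch. All variable names (`x`, `w1`, `w2`, `w3`) are illustrative only, and TVM's actual `nn.dense` convention (transposed weights) is glossed over here.

```python
# Conceptual sketch of CombineParallelDense with to_batch=False,
# using plain matmul semantics (y = x @ w) rather than TVM's layout.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                  # shared input: (batch, in_dim)
w1, w2, w3 = (rng.standard_normal((8, 16)) for _ in range(3))

# Three parallel dense ops on the same input...
y1, y2, y3 = x @ w1, x @ w2, x @ w3

# ...are equivalent to one wider dense op (weights concatenated along the
# output dimension) followed by a split back into the original branches.
w_flat = np.concatenate([w1, w2, w3], axis=1)    # (8, 48)
y_flat = x @ w_flat                              # one kernel launch instead of three
z1, z2, z3 = np.split(y_flat, 3, axis=1)

assert np.allclose(y1, z1) and np.allclose(y2, z2) and np.allclose(y3, z3)
```

   This is why auto-tuning must run after the pass: the tuner should see the combined `(8, 48)` workload, not three `(8, 16)` ones.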
   
   - How about improving `ParallelOpBatchCombiner` into an exposed, optional pass (maybe in another PR)? It could then be used like `mod = CombineParallelOpToBatch("op_name", "batch_op_name", 3)`. That would serve the original idea of this class, and users could combine various kinds of ops flexibly. Of course, the use case may be rare in common network structures.
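
   For contrast, a NumPy sketch (again, not TVM code) of what the batch variant (`to_batch=True`) does: instead of widening one dense op, the branch weights are stacked into a 3-D tensor and the parallel dense ops are replaced by a single batched matmul. Names are illustrative.

```python
# Conceptual sketch of the batch combine: stack weights, one batch matmul.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))                  # shared input: (batch, in_dim)
weights = [rng.standard_normal((8, 16)) for _ in range(3)]

# Per-branch parallel dense results.
ys = [x @ w for w in weights]

# Batched form: stack the weights along a new "branch" axis, broadcast the
# input across it, and do one batched matmul (cf. relay batch_matmul).
w_batch = np.stack(weights, axis=0)              # (3, 8, 16)
y_batch = np.einsum('bi,nio->nbo', x, w_batch)   # (3, 4, 16)

for i in range(3):
    assert np.allclose(ys[i], y_batch[i])
```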
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
