jerqi commented on PR #307:
URL: 
https://github.com/apache/incubator-uniffle/pull/307#issuecomment-1311551218

   > > > > We can read Spark's executor configuration, but the running 
application may not actually be able to allocate that many resources.
   > > > 
   > > > 
   > > > If we use dynamic allocation, we can't know the number of executors, 
so I think we can introduce a configuration option first and let the user set 
that value. Similarly, ByteDance's Cloud Shuffle Service derives the task 
concurrency through an empirical formula; see 
https://github.com/bytedance/CloudShuffleService/blob/ef0ffb3f43f9f6e96af49629aed2a6ce61a6a2ab/spark-shuffle-manager-2/src/main/scala/org/apache/spark/shuffle/css/CssShuffleManager.scala#L64
   > > 
   > > 
   > > Yes. This optimization has been applied in our internal Uniffle, and it 
works well.
   > 
   > Maybe we can apply this feature in the community version and estimate the 
number of shuffle servers needed from the number of concurrent tasks.
   
   Would you like to contribute this feature and let @zuston help you review 
it?
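   For context, here is a minimal Java sketch of the kind of estimate being 
discussed: use the known executor configuration when static allocation makes 
it reliable, and fall back to a user-settable value when dynamic allocation 
hides the real executor count. All names and the fallback constant here are 
hypothetical illustrations, not actual Uniffle or Cloud Shuffle Service 
identifiers or the formula in the linked file.

   ```java
   public class ConcurrencyEstimator {
     // Hypothetical fallback used when the executor count is unknown,
     // e.g. because dynamic allocation is enabled. In practice this would
     // come from a user-facing configuration key.
     static final int DEFAULT_CONCURRENCY = 20;

     // executors <= 0 signals "unknown" (dynamic allocation enabled).
     // Otherwise the peak task concurrency is executors * cores per executor.
     static int estimateTaskConcurrency(int executors, int coresPerExecutor) {
       if (executors <= 0 || coresPerExecutor <= 0) {
         return DEFAULT_CONCURRENCY;
       }
       return executors * coresPerExecutor;
     }

     public static void main(String[] args) {
       // Static allocation: 10 executors * 4 cores -> 40 concurrent tasks.
       System.out.println(estimateTaskConcurrency(10, 4));  // prints 40
       // Dynamic allocation: executor count unknown -> configured fallback.
       System.out.println(estimateTaskConcurrency(-1, 4));  // prints 20
     }
   }
   ```

   The estimated concurrency could then feed into choosing how many shuffle 
servers to assign, as suggested above.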


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

