abhishekd0907 commented on pull request #35858:
URL: https://github.com/apache/spark/pull/35858#issuecomment-1072285103


   > Yeah, the resources available on a large cluster can change very rapidly 
and it should not be relied upon. I guess your proposal here is to specifically 
request hosts? In some ways this is like the locality requests, but there is no 
way to guarantee what YARN told you was available in one heartbeat will still 
be available in the next one. Spark can figure out what it wants for 
requirements - locality for data, networks, etc.. but seems very perilous to 
try to assume we can know what YARN is doing. Even with likely data locality, 
generally you have 3 replicas and request 3 hosts and it only tries to get 
those for a limited amount of time. I have seen way too many times we request
specific hosts and jobs take longer because of it vs just running on what is 
available (which YARN decides).
   > 
   > In the end what is your end goal by making these changes?
   
   @tgravescs @mridulm 
   In most YARN cluster setups, starting a new executor container on an already existing node has very small latency (a few seconds), but bringing up a new node can take much longer (on the order of a few hundred seconds). Currently, Dynamic Allocation doesn't know how many nodes are immediately available; it simply requests executors based on the parallelism requirement of the active stages. The requested executors may therefore be a) allocated immediately (if there are free nodes in the YARN cluster) or b) allocated after a delay of a few minutes (if there are no free nodes and YARN needs to provision new ones). If no nodes are immediately available and the latency to add a new node is high, it may not make sense to request more executors, since all the tasks in the active stage may finish before a new executor can be allocated, and the cluster scale-up is wasted. For example, if a Spark application is running on a single one-core executor, there is one active stage with 100 pending tasks, and the average time of the tasks completed so far is 1 second, then the expected time to complete the stage with the single executor is 100 seconds. If the latency to bring up a new node is 2 minutes (120 seconds), it doesn't make sense to request executors, because all tasks will finish before the second executor is added. However, if there is a free node already present in the cluster, a new executor can be started on that node immediately and some of the pending tasks can be scheduled on it.
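   To make the heuristic concrete, here is a minimal sketch in Scala. It is not existing Spark code; the object and parameter names (ScaleUpHeuristic, pendingTasks, avgTaskTimeMs, nodeStartupLatencyMs, freeNodesReported) are hypothetical and only illustrate the decision described above.

```scala
// Hypothetical sketch, not part of Spark's ExecutorAllocationManager.
object ScaleUpHeuristic {

  /**
   * Returns true if requesting additional executors is expected to help,
   * i.e. the active stage would not already finish on the current
   * executors before a newly provisioned node could contribute.
   */
  def shouldRequestMoreExecutors(
      pendingTasks: Int,
      avgTaskTimeMs: Long,
      currentExecutorCores: Int,
      nodeStartupLatencyMs: Long,
      freeNodesReported: Int): Boolean = {
    // Expected time to drain the pending tasks with the executors we already have.
    val expectedRemainingMs =
      math.ceil(pendingTasks.toDouble / currentExecutorCores).toLong * avgTaskTimeMs

    if (freeNodesReported > 0) {
      // An executor can start on an existing free node within seconds,
      // so scaling up is worthwhile.
      true
    } else {
      // Only scale up if the stage will still be running once a new node
      // (and an executor on it) becomes available.
      expectedRemainingMs > nodeStartupLatencyMs
    }
  }
}

// The example above: 100 pending tasks, 1s average task time, one
// single-core executor, 120s node startup latency, no free nodes.
// 100s of remaining work < 120s startup latency => returns false.
// ScaleUpHeuristic.shouldRequestMoreExecutors(100, 1000L, 1, 120000L, 0)
```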
   
   I agree with your assessment that resources available on a large cluster can change very rapidly and that there is no way to guarantee that what YARN told you was available in one heartbeat will still be available in the next one, especially if there are multiple applications running on the cluster and all of them are dynamically requesting resources. However, we can write best-effort, self-correcting dynamic allocation logic that avoids the wasted scale-ups discussed above. If the information provided by YARN turns out to be wrong, Dynamic Allocation can simply request more executors in the next iteration. Moreover, since the logic would be configurable, users could enable it only for the workloads where it works best, for example where a single application runs on the cluster at a time and the information about immediately available resources provided by YARN is relatively reliable.
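   As a rough illustration of the self-correcting part, below is a hedged Scala sketch, again with made-up names (SelfCorrectingAllocator, nextRequest, freeNodesReportedByYarn) rather than any existing Spark or YARN API. The request is capped by the free capacity last reported by YARN; if the previous capped request was not fully granted, the hint is treated as stale and the next iteration falls back to the usual parallelism-based target.

```scala
// Hypothetical sketch of a best-effort, self-correcting allocator.
class SelfCorrectingAllocator(useYarnFreeNodeHint: Boolean) {

  private var lastRequested = 0
  private var hintTrusted = useYarnFreeNodeHint

  /** Called once per dynamic-allocation cycle; returns the executor target. */
  def nextRequest(
      executorsNeededForParallelism: Int,
      executorsGrantedSoFar: Int,
      freeNodesReportedByYarn: Int,
      executorsPerNode: Int): Int = {
    // If the previous capped request was not satisfied, the free-node hint
    // from YARN was stale; stop trusting it and request the full target.
    if (lastRequested > 0 && executorsGrantedSoFar < lastRequested) {
      hintTrusted = false
    }

    val target = if (hintTrusted) {
      // Best effort: don't ask for more executors than can start immediately.
      math.min(executorsNeededForParallelism,
        freeNodesReportedByYarn * executorsPerNode)
    } else {
      executorsNeededForParallelism
    }

    lastRequested = target
    target
  }
}
```

   Whether this trade-off is worthwhile would be left to the user via a configuration flag, as described above.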
   
   Let me know your thoughts.

