Github user tgravescs commented on the issue:

    https://github.com/apache/spark/pull/18874
  
    To answer a few of your last questions.
    It doesn't hurt the common case: in the common case all of your executors
have tasks on them as long as there are tasks to run, since the scheduler can
normally fill up the executors. It will use more resources if the scheduler
takes time to put tasks on them, but weighing that against the time wasted in
jobs that don't have enough executors to run on is hard to quantify, because
it's going to be so application dependent. Yes, it is a behavior change, but a
behavior change that fixes an issue.
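    For anyone following along, these are the dynamic allocation knobs in play
here. The point above is that the behavior should be right without users having
to retune them; the values below are purely illustrative, not recommendations.

    ```scala
    import org.apache.spark.SparkConf

    // Illustrative settings only.
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      // The external shuffle service is required for dynamic allocation on YARN.
      .set("spark.shuffle.service.enabled", "true")
      // How long an executor sits without a task before it is released.
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
      // How long tasks can be backlogged before we request more executors.
      .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
    ```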
    
    I would much rather see us do as much as possible to make things work and
be as fast as possible for the user. This is another reason I don't think a
user should have to change configs for this.
    
    Like I've mentioned before, the other approach would be to let the
executors idle timeout and then go back later to get more and see if they can
be used. That again is a trade-off. The only other real way to fix this is to
flip the model and have the scheduler tell us exactly which nodes it wants, and
when, and we go get them. The problem there is that with YARN we aren't
guaranteed the exact node (sketched below). That is also a much bigger
architectural change, though.
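    To make the YARN point concrete: even if the scheduler asked for a
specific node, a container request is only a preference. A minimal sketch using
the Hadoop AMRMClient API (the node name and resource sizes here are
hypothetical):

    ```scala
    import org.apache.hadoop.yarn.api.records.{Priority, Resource}
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest

    // Ask for a container on a specific node. With relaxLocality = true the
    // ResourceManager may still place the container rack-local or anywhere in
    // the cluster, so we are never guaranteed the exact node we asked for.
    val capability = Resource.newInstance(4096, 2) // 4 GB memory, 2 vcores
    val request = new ContainerRequest(
      capability,
      Array("node-17.example.com"), // hypothetical preferred node
      null,                         // no rack preference
      Priority.newInstance(1),
      true)                         // relaxLocality: allow fallback placement
    ```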
    
    For GC, yes, your job might have other issues, but things like node
slowdown or network slowness have nothing to do with your job. Again, Spark
should be resilient to any weird errors and do its best to make things run
well.

