GitHub user squito commented on the issue:

    https://github.com/apache/spark/pull/21068
  
    Tom and I had a chance to discuss this in person, and after some back and forth I think we decided that it's probably best to remove the limit but have the application fail if the entire cluster is blacklisted.  @tgravescs, does that sound correct?
    
    I mentioned this briefly to @attilapiros and he mentioned that might be hard; instead, you could stop doing allocation blacklisting, which would result in the usual YARN app failure from too many executor failures.  He's going to look at this a little more closely and report back here.  I'd be OK with that -- the main goal is just to make sure that an app doesn't hang if you've blacklisted the entire cluster.  I'm pretty sure that's @tgravescs's main concern as well.  (If the only reasonable way to do that is with the existing limit, I'm fine w/ that too.)
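
    Just to make the "fail instead of hang" idea concrete, here's a minimal sketch of the check being discussed. This is not the actual allocator/blacklist-tracker code; the names (`SparkClusterState`, `clusterNodeCount`, `blacklistedNodes`, `assertSchedulable`) are all hypothetical and just illustrate the shape of the behavior:

```scala
// Hypothetical sketch only -- none of these names come from the real Spark code.
object BlacklistCheckSketch {

  // Minimal stand-in for whatever state the allocator would consult.
  final case class SparkClusterState(clusterNodeCount: Int, blacklistedNodes: Set[String])

  // Fail fast instead of letting the app sit idle when no node can host an executor.
  def assertSchedulable(state: SparkClusterState): Unit = {
    if (state.clusterNodeCount > 0 && state.blacklistedNodes.size >= state.clusterNodeCount) {
      throw new IllegalStateException(
        s"All ${state.clusterNodeCount} nodes are blacklisted; failing the application " +
          "rather than hanging while waiting for executors that can never be allocated.")
    }
  }

  def main(args: Array[String]): Unit = {
    val healthy = SparkClusterState(clusterNodeCount = 3, blacklistedNodes = Set("node1"))
    assertSchedulable(healthy) // fine: two nodes are still usable

    val allBad = SparkClusterState(clusterNodeCount = 2, blacklistedNodes = Set("node1", "node2"))
    try assertSchedulable(allBad) // entire cluster blacklisted
    catch { case e: IllegalStateException => println(s"App would fail fast: ${e.getMessage}") }
  }
}
```

    The alternative mentioned above (dropping allocation blacklisting and relying on the existing executor-failure limit) would reach the same end state -- the app fails rather than hangs -- just via a different mechanism.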

