Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/2746#issuecomment-59295974
  
    I agree with @pwendell that the retry thing looks a little convoluted, 
especially since I don't see any way for the backend to report the status of a 
request back to the caller (e.g. has this executor request failed, or is it 
just taking a long time?).
    
    It would be nice to have a channel back to the requester so that the 
backend can report a proper status. E.g. say you hit some limit on the number 
of executors you can allocate: the current code will keep retrying that many 
times fruitlessly until it decides to give up. That could be avoided if the 
backend could just say "I can't allocate more executors".
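    The idea above could be sketched roughly as follows. This is a hypothetical shape, not the PR's actual API; the type and case names (`ExecutorRequestStatus`, `LimitReached`, `shouldRetry`) are made up for illustration:

```scala
// Hypothetical sketch: a status the backend sends back on the return channel,
// so the caller can distinguish "still working on it" from "will never succeed".
sealed trait ExecutorRequestStatus
case object Granted extends ExecutorRequestStatus
case object Pending extends ExecutorRequestStatus       // not failed, just slow
case object LimitReached extends ExecutorRequestStatus  // "I can't allocate more executors"

// The caller's retry loop can then short-circuit instead of retrying fruitlessly:
def shouldRetry(status: ExecutorRequestStatus): Boolean = status match {
  case Pending                 => true   // the request may just be taking a long time
  case Granted | LimitReached  => false  // either done, or retrying can't help
}
```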
    
    Also, it doesn't seem like there's a way for the backend to tell whether a 
`requestExecutor` call is a new request or a retry. For YARN, at least, that 
makes a difference; you don't want to create new container requests when 
retrying, or you'll end up allocating more executors than intended. (This 
kinda loops back into the comment above about communicating request status 
back to the requestor.)
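    One common way to address that is to tag each logical request with an id, so the backend can treat a retry as idempotent. Again, just a sketch under assumed names (`RequestExecutors`, `Backend.handle` are hypothetical, not the PR's code):

```scala
import scala.collection.mutable

// Hypothetical message: a retry re-sends the same requestId as the original.
case class RequestExecutors(requestId: Long, numAdditional: Int)

class Backend {
  private val seen = mutable.Set[Long]()
  private var totalRequested = 0

  // Returns the total number of executors requested so far.
  def handle(req: RequestExecutors): Int = {
    // Set.add returns false for an already-seen id, so a retried request
    // is ignored instead of inflating the allocation (the YARN concern above).
    if (seen.add(req.requestId)) {
      totalRequested += req.numAdditional
    }
    totalRequested
  }
}
```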


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
