Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/634#issuecomment-42149808
  
    Ah I see - sorry I didn't read the JIRA.
    
    So since we don't know a priori how many executors to expect, I don't think 
we can wait on any particular condition. Even in YARN mode, AFAIK the executor 
count is just a request to the RM and not a guarantee.
    
    I'm still not 100% sure why this needs to exist. @mridulm is the main 
concern here just launching non-local tasks? If so, why not just set the 
`spark.locality.wait` threshold? That's the whole reason it exists.
    
    In standalone mode this may not really be necessary, because executors 
typically launch in under 3 seconds, which is the default locality wait.
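
    For reference, a minimal sketch of raising that threshold at submit time 
(the `10000` value here is an illustrative choice, not a recommendation; in 
Spark releases of this era the setting is interpreted as milliseconds):

    ```shell
    # Hold out longer for node-local task slots while executors
    # are still registering, instead of falling back to non-local
    # scheduling after the 3-second default.
    spark-submit \
      --conf spark.locality.wait=10000 \
      --class org.example.MyApp \
      my-app.jar
    ```

    The same key can also be set in `spark-defaults.conf` or on the 
`SparkConf` programmatically.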

