Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/3686#issuecomment-67003632
  
    @sryza I understand what you are saying, but I don't see anywhere in this
pull request where the yarn-client AM is referred to as the driver; the conf
in the current code is spark.yarn.am.cores. Am I missing something?
    
    I didn't have time to do a full review last week, but I was leaning
towards what @vanzin mentioned: reusing the driver-cores option to specify
cores in yarn-cluster mode, which is why I mentioned that option in my
original post as well. That way it matches other things like driver-memory,
etc. Sorry for any confusion on my comment; it wasn't intended to be a full
review, just an answer to the question from @scwf.
    
    The current conf specified (spark.yarn.am.cores) would then work in client
mode. Thoughts or objections to that?
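
    For concreteness, a rough sketch of how that split could look from the
spark-submit side (illustrative only; the jar name and core counts are
placeholders, and the exact wiring is still up for discussion in this PR):

        # yarn-cluster mode: reuse the generic driver-cores option,
        # consistent with --driver-memory
        spark-submit --master yarn-cluster --driver-cores 2 \
          --driver-memory 4g app.jar

        # yarn-client mode: the AM is separate from the driver, so the
        # YARN-specific conf would apply instead
        spark-submit --master yarn-client \
          --conf spark.yarn.am.cores=2 app.jar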

