GitHub user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/3686#issuecomment-67876245
  
    So, after actually reading the code :-), the current implementation uses 
`spark.yarn.am.cores` for both client and cluster mode.
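
    For reference, this is roughly what the shared lookup amounts to today (a 
minimal sketch, not the actual source; the object and method names are made 
up, only the config key is real):

    ```scala
    import org.apache.spark.SparkConf

    // Sketch of the current behavior: one key serves both deploy modes.
    object CurrentAmCores {
      def amCores(conf: SparkConf): Int =
        conf.getInt("spark.yarn.am.cores", 1) // assuming a default of 1 core
    }
    ```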
    
    I think that's bad: if someone sets this in the default configuration and 
runs apps in both client and cluster mode, they may over-allocate resources in 
client mode, where the AM is only a lightweight launcher rather than a 
container that also hosts the driver. As with other options, I think it's 
better practice to separate the two modes to avoid confusion.
    
    With that in mind, this is my suggestion (a rough sketch follows the list):
    - Use `spark.yarn.am.cores` for client mode, much like the current code 
does.
    - Add a separate option (`spark.driver.cores`?) for cluster mode, and tie 
it to the `--driver-cores` command line option that already exists.
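
    To make the split concrete, here is a rough sketch of the resolution I 
have in mind; only the two config keys and `--driver-cores` come from the 
discussion above, everything else is illustrative:

    ```scala
    import org.apache.spark.SparkConf

    // Hypothetical per-mode resolution of the AM's core count.
    object ProposedAmCores {
      def amCores(conf: SparkConf, isClusterMode: Boolean): Int =
        if (isClusterMode) {
          // Cluster mode: the AM hosts the driver, so honor
          // spark.driver.cores, which --driver-cores would populate.
          conf.getInt("spark.driver.cores", 1)
        } else {
          // Client mode: the AM is just a lightweight launcher.
          conf.getInt("spark.yarn.am.cores", 1)
        }
    }
    ```

    That way a value meant for a cluster-mode driver never leaks into the 
much smaller client-mode AM.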

