GitHub user rnowling commented on the pull request:

    https://github.com/apache/spark/pull/4027#issuecomment-167166741
  
    It's really unfortunate that this patch was closed without merging.  I
disagree with @andrewor14 and others that it exposes too much to the average
user.  It's much easier (for me) to think about the number of cores and amount
of memory per NODE.  I actually think the current approach used in Spark
(specifying a total number of cores) is more confusing because the behavior
changes depending on the cluster and how the scheduler spreads those cores
across it.  The current approach also has the problem mentioned by @maasg --
no control over how resources are distributed across nodes.  In multi-user
environments (where resources may not be consumed through a single system
alone), per-node control makes it easier to avoid over-subscribing nodes.
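
    For reference, a minimal sketch of how the total-cores model is expressed
today, contrasted with the kind of per-node knob being argued for.  The
`spark.cores.max` and `spark.executor.memory` keys are real Spark standalone
settings; the per-node property name at the end is purely illustrative and is
not an actual Spark configuration key.

```scala
import org.apache.spark.SparkConf

// Today's standalone-mode model: the application requests a TOTAL core budget
// for the whole cluster; Spark's scheduler decides how those cores are spread
// across workers, so the per-node footprint is not under the user's control.
val conf = new SparkConf()
  .setMaster("spark://master:7077")      // placeholder master URL
  .setAppName("total-cores-vs-per-node")
  .set("spark.cores.max", "16")          // total cores across all nodes
  .set("spark.executor.memory", "4g")    // memory per executor

// A per-node model would instead cap what a single node can give to the
// application.  The key below is HYPOTHETICAL (not a real Spark property)
// and only stands in for the kind of setting this patch proposed:
// conf.set("spark.deploy.maxCoresPerNode", "4")
```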

