Github user jdesmet commented on the pull request:

    https://github.com/apache/spark/pull/9095#issuecomment-216370906
  
    However, the memory reported for the containers in the YARN UI largely 
matches what I declared for the Spark executors. Also, the capacity scheduler 
does have the option to use a resource calculator capable of accounting for 
CPU utilization. That led me to (wrongly?) assume that the capacity scheduler 
can take (measured?) memory and CPU utilization into account. 
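    For reference, the capacity scheduler option I have in mind is the resource 
calculator property in capacity-scheduler.xml; a minimal sketch (assuming the 
stock DominantResourceCalculator that ships with Hadoop) would look like:
    
        <!-- illustrative sketch only: enable CPU-aware scheduling in capacity-scheduler.xml -->
        <property>
          <name>yarn.scheduler.capacity.resource-calculator</name>
          <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
        </property>
    
    With that calculator, YARN schedules on both memory and vCores rather than 
memory alone, though as far as I understand it still schedules on the requested 
values, not on measured utilization.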
    
    
    > On May 2, 2016, at 10:39 AM, Marcelo Vanzin <[email protected]> wrote:
    > 
    > why we can't report the correct vCores
    > 
    > @jdesmet Spark is not reporting anything, and that's the part you are 
    > confused about. YARN does all its accounting correctly. If Spark were 
    > able to influence YARN's accounting, that would be a huge bug in YARN.
    > 


