Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/2253#issuecomment-54502879
  
    Hey Mridul,
    
    Yarn actually handles rounding up to a multiple for you.  Perhaps really early 
0.23 builds didn't do this.  For instance, with a minimum container size of 512m, 
if you ask for 700m, it will round that up and give you a 1g container.  
Obviously, one thing it doesn't do now is change your heap size.  It will be 
exactly what the user requests, versus us rounding it up.  If we think that is 
important then we can put it back in.  I am leaning toward removing the 
unneeded logic and giving the user the size they really request.
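    For clarity, the round-up behavior described above can be sketched like this 
(an illustrative example only, not YARN's actual scheduler code; the 
`normalize` helper name is made up for this sketch):

```java
// Illustrative sketch of how a scheduler normalizes a memory request
// up to the next multiple of the minimum allocation (values in MB).
public class RoundUpDemo {
    // Round `request` up to the nearest multiple of `minAllocation`.
    static int normalize(int request, int minAllocation) {
        return ((request + minAllocation - 1) / minAllocation) * minAllocation;
    }

    public static void main(String[] args) {
        // The example from this comment: min 512m, request 700m -> 1g container
        System.out.println(normalize(700, 512));  // 1024
        System.out.println(normalize(512, 512));  // 512
        System.out.println(normalize(513, 512));  // 1024
    }
}
```

    Note the request itself is what the scheduler rounds; the JVM heap size 
passed to the container is a separate setting, which is the point above.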
    
    Also, as you will notice, that logic hasn't been in the Hadoop 2.x port 
anyway, which I think more people are using at this point.

