Github user tgravescs commented on the issue:

    https://github.com/apache/spark/pull/16819
  
    I agree with the others: this is not the way to do this. There are different
    schedulers in YARN, each with different configs that can affect the actual
    resources you get.
    
    If you want to do something like this, it should look at the available
    resources returned by the allocate call to YARN
    (allocateResponse.getAvailableResources). The allocate response reports the
    resources currently available to the application, and that number already
    takes the scheduler's configuration into account. A rough sketch of reading
    it follows.
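
    A minimal sketch, assuming the application master's AMRMClient is already
    started and registered; the method name and progress value here are
    illustrative, not existing Spark code:

        import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
        import org.apache.hadoop.yarn.api.records.Resource
        import org.apache.hadoop.yarn.client.api.AMRMClient
        import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest

        // allocate() is the regular heartbeat to the ResourceManager; the
        // response carries the headroom the scheduler computed for this app.
        def currentHeadroom(amClient: AMRMClient[ContainerRequest]): Resource = {
          val response: AllocateResponse = amClient.allocate(0.1f)
          response.getAvailableResources
        }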
    
    MapReduce refers to that value as headroom and uses it to decide things like
    whether it needs to kill a reducer to run a map. We could use the same
    information to make dynamic allocation smarter, for example by capping new
    executor requests at what the cluster can actually grant (see the sketch
    below).
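
    A hedged sketch of how dynamic allocation could consult that headroom before
    asking for more executors; executorsThatFit and maybeRequestExecutors are
    hypothetical helpers, not existing Spark APIs, and the memory accessor
    depends on the Hadoop version:

        import org.apache.hadoop.yarn.api.records.Resource

        // How many executors of the given size fit inside the reported headroom.
        def executorsThatFit(headroom: Resource,
                             executorMemoryMb: Long,
                             executorCores: Int): Int = {
          // getMemory() is in MB in Hadoop 2.x; newer versions offer getMemorySize().
          val byMemory = headroom.getMemory / executorMemoryMb
          val byCores = (headroom.getVirtualCores / executorCores).toLong
          math.max(0L, math.min(byMemory, byCores)).toInt
        }

        // Cap new requests at what the scheduler says it can actually grant,
        // instead of queueing asks that would just sit pending.
        def maybeRequestExecutors(headroom: Resource, wanted: Int,
                                  executorMemoryMb: Long, executorCores: Int): Int = {
          math.min(wanted, executorsThatFit(headroom, executorMemoryMb, executorCores))
        }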


