Github user Tagar commented on the issue:

    https://github.com/apache/spark/pull/19046
  
    > This could just be an adhoc queue but the spark users would lose out to 
tez/mapreduce users. I'm pretty positive this will hurt spark users on some of 
our cluster so would want performance numbers to prove it doesn't. Otherwise 
would want the **config** to turn it off. Another way to possibly help this 
problem would be to ask for a certain percentage over the actual limit.
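
    A minimal sketch of the "percentage over the actual limit" idea from the quote above (the helper name and the 10% default are hypothetical, not from this PR):

```python
def padded_request(needed_containers, overhead_pct=0.10):
    """Hypothetical helper: pad the resource ask by a fixed percentage
    so the scheduler keeps some headroom over the actual need.

    needed_containers -- containers the app actually needs right now
    overhead_pct      -- fractional padding on top of that (assumed value)
    """
    return int(round(needed_containers * (1 + overhead_pct)))

# If 100 containers are actually needed, ask the RM for 110.
```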
    
    I think this is the only remaining concern from @tgravescs. @vanzin, would it 
be possible to make this new logic configurable?
    
    We have seen Spark's aggressive reservation requests cause problems, e.g. 
YARN preemption not kicking in. It would be great to have this fix in.
    
    Thank you!


