tgravescs commented on pull request #35504:
URL: https://github.com/apache/spark/pull/35504#issuecomment-1043053330


   So overall I'm fine with the concept and think it should be consistent across 
spark. But going back to how/when this config was added, I believe it was 
specifically decided at that time not to have a percentage-based configuration on 
YARN. If I recall correctly, that was based on the use cases at the time, so there may very 
well be more/different use cases now, or we simply have more experience. Part of 
the reasoning was that many of the off-heap type configs are a specific size (i.e. 
offheap=5g). Can I ask what use cases this is targeting?
   
   If this goes in, the docs need to be clear about the precedence of these configs.
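   
   As a sketch of the precedence question (the percentage-style setting name below is an assumption about what this PR adds; only the fixed-size settings are configs I know exist today):
   
   ```
   # Existing fixed-size settings:
   spark.memory.offHeap.enabled=true
   spark.memory.offHeap.size=5g            # explicit off-heap size
   spark.executor.memoryOverhead=2g        # explicit overhead size
   
   # Percentage-style setting (assumed name for illustration):
   # spark.executor.memoryOverheadFactor=0.1   # fraction of executor memory
   
   # The docs would need to state which wins when both the fixed-size
   # and percentage-style settings are set, e.g. explicit size takes
   # precedence over the factor.
   ```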


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


