xkrogen commented on PR #4209: URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1115302581
I think this change is a bit too restrictive. There may well be valid use cases for setting it above the 90% threshold. For example, if you configure a 100GB heap, you really don't need 10GB of non-cache overhead, so you could safely allocate 95GB to the cache. If we want to add fail-fast behavior, I would say it should apply only when `cache size >= heap size`: that configuration is clearly invalid, since you need at least _some_ overhead heap memory.
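
For illustration, here is a minimal sketch of what that fail-fast check could look like. The config key, class, and method names are hypothetical placeholders, not taken from the PR; the real patch would wire this into whatever component reads the cache-size setting:

```java
import org.apache.hadoop.conf.Configuration;

public class CacheSizeValidator {
  // Hypothetical key for illustration only; not necessarily the key the PR touches.
  static final String CACHE_SIZE_KEY = "example.cache.size.bytes";

  static void validateCacheSize(Configuration conf) {
    long cacheSize = conf.getLong(CACHE_SIZE_KEY, 0);
    long maxHeap = Runtime.getRuntime().maxMemory();
    // Fail fast only in the clearly invalid case: the cache consumes the
    // entire heap (or more), leaving no overhead memory at all.
    if (cacheSize >= maxHeap) {
      throw new IllegalArgumentException(
          "Configured cache size (" + cacheSize + " bytes) must be strictly"
              + " less than the maximum heap size (" + maxHeap + " bytes)");
    }
    // Deliberately no check against a fixed percentage threshold like 90%,
    // since large heaps can legitimately dedicate well above 90% to cache.
  }
}
```

The key design point is that the validation rejects only configurations that can never work, rather than enforcing a fixed ratio that penalizes large heaps.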
