tomscut commented on PR #4209:
URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1115568961

   > I think this change is a bit too restrictive. There may well be valid use 
cases for setting it above the 90% threshold. For example if you configured a 
100GB heap, you really don't need 10GB of non-cache overhead, so you could 
safely allocate 95GB for the cache.
   > 
   > If we want to add fail-fast behavior, I would say it should only apply 
when `cache size >= heap size`. This is clearly invalid -- you need at least 
_some_ overhead heap memory.
   > 
   > Alternatively, you could make the 90% threshold configurable, and point 
users to a config they can adjust if they really want to exceed it. But I think 
this may be overkill.
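
   For reference, a minimal sketch of the fail-fast check described in the quote above, written in plain Java with assumed names (this is not the actual Hadoop startup code):

```java
// A minimal sketch of the suggested fail-fast behavior: only reject the
// clearly invalid case where the configured cache alone would need at least
// the whole heap. Names here are assumptions for illustration only.
public class CacheSizeCheck {
  static void checkCacheSize(long cacheSizeBytes) {
    long maxHeapBytes = Runtime.getRuntime().maxMemory();
    if (cacheSizeBytes >= maxHeapBytes) {
      throw new IllegalArgumentException("Configured cache size ("
          + cacheSizeBytes + " bytes) must be smaller than the max heap ("
          + maxHeapBytes + " bytes), since some overhead heap memory is required");
    }
  }
}
```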
   
   Thanks @xkrogen for the review and comments.
   
   Maybe we can do this:
   Instead of setting the `cache size` to a fixed byte value, set it as a ratio
of the maximum heap memory, defaulting to 0.2.
   This avoids the problem of the cache size being too large relative to the
heap. In addition, users can adjust the heap size themselves when they need a
larger cache.
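
   A minimal sketch of this ratio-based sizing in plain Java (the ratio name and the 0.2 default are taken from this thread, not from an existing Hadoop setting, and this is not the actual configuration code):

```java
// Derive the cache capacity from the JVM's max heap instead of a fixed byte
// value. To get a larger cache, the user raises -Xmx rather than the cache
// setting itself.
public class RatioBasedCacheSize {
  // Assumed default from this discussion: cache may use 20% of the max heap.
  static final double DEFAULT_CACHE_RATIO = 0.2;

  static long computeCacheCapacity(double cacheRatio) {
    if (cacheRatio <= 0.0 || cacheRatio >= 1.0) {
      throw new IllegalArgumentException(
          "cache ratio must be in (0, 1), got " + cacheRatio);
    }
    long maxHeapBytes = Runtime.getRuntime().maxMemory();
    return (long) (maxHeapBytes * cacheRatio);
  }

  public static void main(String[] args) {
    System.out.println("max heap = " + Runtime.getRuntime().maxMemory()
        + " bytes, cache capacity = "
        + computeCacheCapacity(DEFAULT_CACHE_RATIO) + " bytes");
  }
}
```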
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

