[ 
https://issues.apache.org/jira/browse/HBASE-13874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579722#comment-14579722
 ] 

Esteban Gutierrez commented on HBASE-13874:
-------------------------------------------

[~vrodionov] what if we do this in checkForClusterFreeMemoryLimit():

{code}
float globalMemstoreSize = getGlobalMemStorePercent(conf, false);
int gml = (int)(globalMemstoreSize * CONVERT_TO_PERCENTAGE);
float blockCacheUpperLimit = getBlockCacheHeapPercent(conf);
int bcul = (int)(blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);
float minimumMemoryThreshold = 1 -
    conf.getFloat(HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD_KEY,
        HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD_DEFAULT);
if (CONVERT_TO_PERCENTAGE - (gml + bcul)
    < (int)(CONVERT_TO_PERCENTAGE * minimumMemoryThreshold)) {
  throw new RuntimeException("Current heap configuration for MemStore and BlockCache exceeds "
      + "the threshold required for successful cluster operation. "
{code}

So if (block cache upper limit) + (global memstore size) leaves less free heap
than minimumMemoryThreshold, then we throw a RuntimeException. Is that what you
are looking for [~vrodionov]?
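To make the arithmetic concrete, here is a minimal standalone sketch of the same check (illustrative only, not the actual HBaseConfiguration code; the class and method names below are made up):

```java
// Sketch of the free-heap check: the memstore and block cache fractions are
// converted to integer percentages, and the check fires when the remaining
// heap percentage is below the required minimum free fraction.
public class HeapCheckSketch {
    static final int CONVERT_TO_PERCENTAGE = 100;

    // Returns true when memstore + block cache leave less free heap
    // than the minimum free fraction minFree.
    static boolean exceedsLimit(float memstore, float blockCache, float minFree) {
        int gml = (int) (memstore * CONVERT_TO_PERCENTAGE);
        int bcul = (int) (blockCache * CONVERT_TO_PERCENTAGE);
        return CONVERT_TO_PERCENTAGE - (gml + bcul)
                < (int) (CONVERT_TO_PERCENTAGE * minFree);
    }

    public static void main(String[] args) {
        // Defaults: 0.4 memstore + 0.4 block cache with 0.2 minimum free heap
        // pass exactly; bumping either fraction trips the check.
        System.out.println(exceedsLimit(0.4f, 0.4f, 0.2f)); // false
        System.out.println(exceedsLimit(0.5f, 0.4f, 0.2f)); // true
    }
}
```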

> Fix 0.8 being hardcoded sum of blockcache + memstore; doesn't make sense when 
> big heap
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-13874
>                 URL: https://issues.apache.org/jira/browse/HBASE-13874
>             Project: HBase
>          Issue Type: Task
>            Reporter: stack
>            Assignee: Esteban Gutierrez
>         Attachments: 
> 0001-HBASE-13874-Fix-0.8-being-hardcoded-sum-of-blockcach.patch
>
>
> Fix this in HBaseConfiguration:
> {code}
> private static void checkForClusterFreeMemoryLimit(Configuration conf) {
>   float globalMemstoreLimit =
>       conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
>   int gml = (int)(globalMemstoreLimit * CONVERT_TO_PERCENTAGE);
>   float blockCacheUpperLimit =
>       conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
>           HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
>   int bcul = (int)(blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);
>   if (CONVERT_TO_PERCENTAGE - (gml + bcul)
>       < (int)(CONVERT_TO_PERCENTAGE *
>           HConstants.HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD)) {
>     throw new RuntimeException(
>         "Current heap configuration for MemStore and BlockCache exceeds " +
>         "the threshold required for successful cluster operation. " +
>         "The combined value cannot exceed 0.8. Please check " +
>         "the settings for hbase.regionserver.global.memstore.upperLimit and " +
>         "hfile.block.cache.size in your configuration. " +
>         "hbase.regionserver.global.memstore.upperLimit is " +
>         globalMemstoreLimit +
>         " hfile.block.cache.size is " + blockCacheUpperLimit);
>   }
> }
> {code}
> Hardcoding 0.8 doesn't make much sense in a heap of 100G+ (that is 20G over for hbase itself -- more than enough).
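To put numbers on the 100G point above, here is a back-of-the-envelope sketch using the same integer-percentage arithmetic the check uses (hypothetical helper, not HBase code):

```java
// At the hardcoded 0.8 cap, the fixed 20% reserve scales with the heap:
// on a 100 GB heap it pins 20 GB for everything outside the two caches.
public class ReservedHeapSketch {
    // Heap bytes left for HBase itself when memstore + block cache
    // together take combinedPct percent of the heap.
    static long reservedBytes(long heapBytes, int combinedPct) {
        return heapBytes * (100 - combinedPct) / 100;
    }

    public static void main(String[] args) {
        long hundredGb = 100L * 1024 * 1024 * 1024;
        // 100 GB heap at the 0.8 cap: 20 GB stays reserved.
        System.out.println(reservedBytes(hundredGb, 80) / (1024 * 1024 * 1024)); // 20
    }
}
```

A configurable threshold lets operators with very large heaps shrink that fixed 20% reserve instead of wasting tens of gigabytes.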



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
