stack created HBASE-13874:
-----------------------------
Summary: Fix 0.8 being hardcoded sum of blockcache + memstore;
doesn't make sense when big heap
Key: HBASE-13874
URL: https://issues.apache.org/jira/browse/HBASE-13874
Project: HBase
Issue Type: Task
Reporter: stack
Fix this in HBaseConfiguration:
{code}
private static void checkForClusterFreeMemoryLimit(Configuration conf) {
  float globalMemstoreLimit =
      conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
  int gml = (int)(globalMemstoreLimit * CONVERT_TO_PERCENTAGE);
  float blockCacheUpperLimit =
      conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
        HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
  int bcul = (int)(blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);
  if (CONVERT_TO_PERCENTAGE - (gml + bcul)
      < (int)(CONVERT_TO_PERCENTAGE *
          HConstants.HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD)) {
    throw new RuntimeException(
      "Current heap configuration for MemStore and BlockCache exceeds " +
      "the threshold required for successful cluster operation. " +
      "The combined value cannot exceed 0.8. Please check " +
      "the settings for hbase.regionserver.global.memstore.upperLimit and " +
      "hfile.block.cache.size in your configuration. " +
      "hbase.regionserver.global.memstore.upperLimit is " +
      globalMemstoreLimit +
      " hfile.block.cache.size is " + blockCacheUpperLimit);
  }
}
{code}
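For illustration, here is a minimal standalone sketch of the same arithmetic with hypothetical settings of 0.5 for the memstore upper limit and 0.35 for the block cache (neither is a default; the class name and constants below are stand-ins, with the threshold taken as 0.2 to match the 0.8 combined cap in the message above):

{code}
// Sketch only: mirrors the integer-percentage comparison in
// checkForClusterFreeMemoryLimit with made-up configuration values.
public class FreeMemoryCheckSketch {
  private static final int CONVERT_TO_PERCENTAGE = 100;
  private static final float MINIMUM_MEMORY_THRESHOLD = 0.2f; // the 0.8 cap

  public static void main(String[] args) {
    float globalMemstoreLimit = 0.5f;   // hbase.regionserver.global.memstore.upperLimit
    float blockCacheUpperLimit = 0.35f; // hfile.block.cache.size
    int gml = (int)(globalMemstoreLimit * CONVERT_TO_PERCENTAGE);    // 50
    int bcul = (int)(blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);  // 35
    int free = CONVERT_TO_PERCENTAGE - (gml + bcul);                 // 15
    int required = (int)(CONVERT_TO_PERCENTAGE * MINIMUM_MEMORY_THRESHOLD); // 20
    // 15 < 20, so the check above would throw the RuntimeException.
    System.out.println(free < required ? "would fail startup" : "ok");
  }
}
{code}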
Hardcoding 0.8 doesn't make much sense on a heap of 100G+ (that leaves 20G
for hbase itself -- more than enough).
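To make the scaling concrete, a rough sketch (hypothetical heap sizes, stand-in class name) of how much absolute headroom the fixed 20% reservation leaves as the heap grows:

{code}
// Sketch only: the same 0.2 free fraction means very different
// absolute headroom depending on heap size.
public class HeadroomSketch {
  public static void main(String[] args) {
    long[] heapsInGb = {1, 8, 32, 100};
    float minimumFreeFraction = 0.2f; // the fraction behind the hardcoded 0.8 cap
    for (long heapGb : heapsInGb) {
      System.out.printf("heap=%dG -> reserved for hbase itself: %.1fG%n",
          heapGb, heapGb * minimumFreeFraction);
    }
  }
}
{code}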
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)