pustota2009 commented on a change in pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#discussion_r437494471



##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -153,6 +153,22 @@
   private static final String LRU_MAX_BLOCK_SIZE = "hbase.lru.max.block.size";
   private static final long DEFAULT_MAX_BLOCK_SIZE = 16L * 1024L * 1024L;
 
+  private static final String LRU_CACHE_HEAVY_EVICTION_COUNT_LIMIT
+    = "hbase.lru.cache.heavy.eviction.count.limit";
+  // The default value effectively disables this performance-improvement feature,
+  // because reaching a count of 2147483647 would take about ~680 years (only then would it start to work).
+  // We can set it to 0-10 and get the benefit right away.
+  // (see details at https://issues.apache.org/jira/browse/HBASE-23887).
+  private static final int DEFAULT_LRU_CACHE_HEAVY_EVICTION_COUNT_LIMIT = 2147483647;

Review comment:
       fixed
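
   As a sanity check of the "~680 years" figure in the comment above, here is a rough calculation (a sketch only; it assumes the eviction thread's default ~10-second period, which is mentioned further down in this patch):

       long countLimit = 2_147_483_647L;  // Integer.MAX_VALUE, the default count limit
       long periodSeconds = 10L;          // assumed default interval between evict() runs
       double years = countLimit * periodSeconds / (365.0 * 24 * 3600);
       System.out.println(years);         // ~680.9 years before the feature would ever engage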

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);

Review comment:
       fixed

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);
+              // But practice shows that reducing by 15% is quite enough.
+              // We are not greedy (it could lead to a premature exit).
+              ch = ch > 15 ? 15 : ch;

Review comment:
       Almost always the first few times the overhead is more than 15%. I think it is enough, 
because after 30 seconds the caching percent will already be down to no more than 55%, and that 
is anyway very good for performance. And there is no risk of overshooting.
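
   For illustration, here is a minimal sketch (not code from the patch) of how fast the percent of caching blocks falls when the 15-point cap is hit in every ~10-second eviction window:

       int cacheDataBlockPercent = 100;
       for (int window = 1; window <= 3; window++) {
         int ch = 15;  // worst case: the cap is reached in every window
         cacheDataBlockPercent = Math.max(1, cacheDataBlockPercent - ch);
         System.out.println("after ~" + (window * 10) + "s: " + cacheDataBlockPercent + "%");
       }
       // prints 85%, 70%, 55%: down to 55% after ~30 seconds,
       // but never dropping by more than 15 points per window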

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);

Review comment:
       >>Well, I think this is convoluted.
   Yes, I did some refactoring; I hope it is better now.

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);

Review comment:
       >Well, I think this is convoluted.
   
   Yes, I did some refactoring; I hope it is better now.

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);
+              // But practice shows that reducing by 15% is quite enough.
+              // We are not greedy (it could lead to a premature exit).
+              ch = ch > 15 ? 15 : ch;
+              ch = ch < 0 ? 0 : ch; // It should never happen, but check to be sure.
+              // So this is the key point: here we are reducing the % of caching blocks.
+              cache.cacheDataBlockPercent -= ch;
+              // If we go down too deep we have to stop here; at least 1% should remain.
+              cache.cacheDataBlockPercent =
+                cache.cacheDataBlockPercent < 1 ? 1 : cache.cacheDataBlockPercent;
+            }
+          } else {
+            // Well, we have got overshooting.
+            // Maybe it is just a short-term fluctuation and we can stay in this mode.
+            // It helps avoid a premature exit during a short-term fluctuation.
+            // If the overshooting is less than 90%, we will try to increase the percent of
+            // caching blocks and hope it is enough.
+            if (mbFreedSum >= cache.heavyEvictionMbSizeLimit * 0.1) {
+              // Simple logic: more overshooting - more caching blocks (backpressure).
+              int ch = (int) (-freedDataOverheadPercent * 0.1 + 1);

Review comment:
       We need some method to apply back pressure, and it is good to link the amount of back 
pressure to the overshooting. I meant: more overshooting, more back pressure. But not too much. 
It is a kind of best practice because it always works well. I thought about making it 
controllable via a parameter, but decided it would be too complicated. I am not sure how to 
simplify it.
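
   For illustration, a minimal sketch (with made-up overhead values, not from the patch) of how the back-pressure step scales with the overshooting:

       // ch = (int) (-freedDataOverheadPercent * 0.1 + 1): the bigger the overshoot
       // (the more negative the overhead), the more percentage points are given back.
       for (int freedDataOverheadPercent : new int[] {-15, -35, -55, -85}) {
         int ch = (int) (-freedDataOverheadPercent * 0.1 + 1);
         System.out.println("overhead " + freedDataOverheadPercent + "% -> +" + ch + " points");
       }
       // overhead -15% -> +2, -35% -> +4, -55% -> +6, -85% -> +9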

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);
+              // But practice shows that reducing by 15% is quite enough.
+              // We are not greedy (it could lead to a premature exit).
+              ch = ch > 15 ? 15 : ch;
+              ch = ch < 0 ? 0 : ch; // It should never happen, but check to be sure.
+              // So this is the key point: here we are reducing the % of caching blocks.
+              cache.cacheDataBlockPercent -= ch;
+              // If we go down too deep we have to stop here; at least 1% should remain.
+              cache.cacheDataBlockPercent =
+                cache.cacheDataBlockPercent < 1 ? 1 : cache.cacheDataBlockPercent;
+            }
+          } else {
+            // Well, we have got overshooting.
+            // Maybe it is just a short-term fluctuation and we can stay in this mode.
+            // It helps avoid a premature exit during a short-term fluctuation.
+            // If the overshooting is less than 90%, we will try to increase the percent of
+            // caching blocks and hope it is enough.
+            if (mbFreedSum >= cache.heavyEvictionMbSizeLimit * 0.1) {

Review comment:
       If reading falls too much, it looks like it is over and at some point we have to give up 
(exit from this mode). I thought 90% is quite enough, but it could be 95% or 80%. We have an 
option - we could make all of this controllable via new parameters. On the other hand, I am 
afraid users could be confused and spend a lot of time trying to figure out what to do.
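
   To make the threshold concrete, a small sketch with hypothetical numbers (the 500 MB limit below is an assumption, not a default): stay in the mode only while the freed volume is at least 10% of hbase.lru.cache.heavy.eviction.bytes.size.limit.

       long heavyEvictionMbSizeLimit = 500;  // assumed value of the bytes.size.limit, in MB
       for (long mbFreedSum : new long[] {60, 30}) {
         boolean stay = mbFreedSum >= heavyEvictionMbSizeLimit * 0.1;  // at least 10% of the limit
         System.out.println(mbFreedSum + " MB freed -> "
           + (stay ? "short-term dip: stay in the mode and add back pressure"
                   : "reading seems over: give up and exit the mode"));
       }
       // 60 MB freed -> stay (>= 50 MB); 30 MB freed -> exit (< 50 MB)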

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1031,94 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        /*
+         * Sometimes we are reading more data than can fit into BlockCache,
+         * and that causes a high rate of evictions.
+         * This in turn leads to heavy Garbage Collector work.
+         * So a lot of blocks are put into BlockCache but never read,
+         * while spending a lot of CPU resources.
+         * Here we analyze how many bytes were freed and decide
+         * whether the time has come to reduce the amount of caching blocks.
+         * This helps avoid putting too many blocks into BlockCache
+         * when evict() works very actively, and saves CPU for other jobs.
+         * More details: https://issues.apache.org/jira/browse/HBASE-23887
+         */
+
+        // First of all we have to check how much time
+        // has passed since the previous evict() was launched.
+        // It should be almost the same time (+/- 10s)
+        // because we get comparable volumes of freed bytes each time.
+        // 10s because this is the default period to run evict() (see this.wait above)
+        long stopTime = System.currentTimeMillis();
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          // Here we have to work out what situation we have got.
+          // We have the limit "hbase.lru.cache.heavy.eviction.bytes.size.limit"
+          // and can calculate the overhead relative to it.
+          // We will use this information to decide
+          // how to change the percent of caching blocks.
+          freedDataOverheadPercent =
+            (int) (mbFreedSum * 100 / cache.heavyEvictionMbSizeLimit) - 100;
+          if (freedDataOverheadPercent > 0) {
+            // Now we are in the situation where we are above the limit,
+            // but maybe we are going to ignore it because it will end quite soon.
+            heavyEvictionCount++;
+            if (heavyEvictionCount > cache.heavyEvictionCountLimit) {
+              // It has been going on for a long time and we have to reduce caching
+              // blocks now. So we calculate here how many blocks we want to skip.
+              // It depends on:
+              // 1. Overhead - if the overhead is big we can be more aggressive
+              // in reducing the amount of caching blocks.
+              // 2. How fast we want to get the result. If we know that our
+              // heavy reading will last for a long time, we don't want to wait and can
+              // increase the coefficient and get good performance quite soon.
+              // But if we are not sure, we can do it slowly and that could prevent
+              // a premature exit from this mode. So, when the coefficient is
+              // higher we can get better performance when heavy reading is stable.
+              // But when reading is changing we can adjust to it and set
+              // the coefficient to a lower value.
+              int ch = (int) (freedDataOverheadPercent * cache.heavyEvictionOverheadCoefficient);
+              // But practice shows that reducing by 15% is quite enough.
+              // We are not greedy (it could lead to a premature exit).
+              ch = ch > 15 ? 15 : ch;
+              ch = ch < 0 ? 0 : ch; // It should never happen, but check to be sure.
+              // So this is the key point: here we are reducing the % of caching blocks.
+              cache.cacheDataBlockPercent -= ch;
+              // If we go down too deep we have to stop here; at least 1% should remain.
+              cache.cacheDataBlockPercent =
+                cache.cacheDataBlockPercent < 1 ? 1 : cache.cacheDataBlockPercent;
+            }
+          } else {
+            // Well, we have got overshooting.
+            // Maybe it is just a short-term fluctuation and we can stay in this mode.
+            // It helps avoid a premature exit during a short-term fluctuation.
+            // If the overshooting is less than 90%, we will try to increase the percent of
+            // caching blocks and hope it is enough.
+            if (mbFreedSum >= cache.heavyEvictionMbSizeLimit * 0.1) {
+              // Simple logic: more overshooting - more caching blocks (backpressure).
+              int ch = (int) (-freedDataOverheadPercent * 0.1 + 1);
+              cache.cacheDataBlockPercent += ch;
+              // But it can't be more than 100%, so check it.
+              cache.cacheDataBlockPercent =

Review comment:
       done

##########
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
##########
@@ -947,7 +1019,47 @@ public void run() {
         }
         LruBlockCache cache = this.cache.get();
         if (cache == null) break;
-        cache.evict();
+        bytesFreed = cache.evict();
+        long stopTime = System.currentTimeMillis();
+        // Control caching when BlockCache cleaning gets heavy.
+        // It helps avoid putting too many blocks into BlockCache
+        // when evict() works very actively.
+        if (stopTime - startTime <= 1000 * 10 - 1) {
+          mbFreedSum += bytesFreed/1024/1024;
+        } else {
+          freedDataOverheadPercent =

Review comment:
       >if (mbFreedSum > heavyEvictionMbSizeLimit)
   
   OK, I will fix it soon.
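
   For reference, a minimal sketch of the suggested simplification (variable names follow the patch; this is not the final code):

       boolean isHeavyEviction(long mbFreedSum, long heavyEvictionMbSizeLimit) {
         // Compare the freed megabytes to the limit directly instead of going
         // through freedDataOverheadPercent first.
         return mbFreedSum > heavyEvictionMbSizeLimit;
       }
       // isHeavyEviction(700, 500) -> true  (reduce the caching percent)
       // isHeavyEviction(300, 500) -> false (consider back pressure or exiting the mode)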




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

