[ 
https://issues.apache.org/jira/browse/HBASE-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581260#comment-14581260
 ] 

Hadoop QA commented on HBASE-13876:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12738923/HBASE-13876-v4.patch
  against master branch at commit 0f93986015122a517e1a0c949e159bf8fb218092.
  ATTACHMENT ID: 12738923

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

    {color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:red}-1 checkstyle{color}.  The applied patch generated 1913 checkstyle errors (more than the master's current 1912 errors).

    {color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

    {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

    {color:red}-1 core tests{color}.  The patch failed these unit tests:

    {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):
        at org.apache.hadoop.hbase.coprocessor.TestCoprocessorInterface.testSharedData(TestCoprocessorInterface.java:290)
        at org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager.test(TestReplicationWALReaderManager.java:181)
        at org.apache.hadoop.hbase.wal.TestWALSplit.testLogsGetArchivedAfterSplit(TestWALSplit.java:654)
        at org.apache.hadoop.hbase.wal.TestWALSplit.testSplitLogFileFirstLineCorruptionLog(TestWALSplit.java:1056)

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/14373//testReport/
Release Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/14373//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/14373//artifact/patchprocess/checkstyle-aggregate.html

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/14373//console

This message is automatically generated.

> Improving performance of HeapMemoryManager
> ------------------------------------------
>
>                 Key: HBASE-13876
>                 URL: https://issues.apache.org/jira/browse/HBASE-13876
>             Project: HBase
>          Issue Type: Improvement
>          Components: hbase, regionserver
>    Affects Versions: 2.0.0, 1.0.1, 1.1.0, 1.1.1
>            Reporter: Abhilash
>            Assignee: Abhilash
>            Priority: Minor
>         Attachments: HBASE-13876-v2.patch, HBASE-13876-v3.patch, 
> HBASE-13876-v4.patch, HBASE-13876.patch
>
>
> I am trying to improve the performance of DefaultHeapMemoryTuner by 
> introducing additional checks. The conditions under which the current 
> DefaultHeapMemoryTuner acts occur very rarely, so I am relaxing those 
> conditions to make the tuner act more often.
> First, check the current memstore size and current block cache size: if we 
> are using less than 50% of the currently available block cache, we consider 
> the block cache sufficient, and likewise for the memstore. This check is 
> most effective when the server is either load-heavy or write-heavy. The 
> earlier version only acted when the number of evictions or the number of 
> flushes was zero, which is very rare.
> Otherwise, based on the percent change in the number of cache misses and 
> the number of flushes, we increase or decrease the memory provided for 
> caching or for the memstore. After doing so, on the next call of 
> HeapMemoryTuner we verify that the last change has indeed decreased the 
> combined number of evictions and flushes. I do this analysis by comparing 
> the percent change (essentially a normalized derivative) of the number of 
> evictions and the number of flushes over the last two periods. The main 
> motivation is that with random reads we will see many cache misses, but 
> even after increasing the block cache we will not be able to reduce them; 
> in that case we revert the change, so we eventually stop wasting memory on 
> the block cache. This also helps us ignore random short-term spikes in 
> reads and writes.
>   
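The normalized-derivative idea in the description above can be sketched roughly as follows. This is a minimal illustration only: the class, method names, and decision thresholds are assumptions for the sketch, not the actual DefaultHeapMemoryTuner code in the patch.

```java
// Hypothetical sketch of the tuning heuristic described in HBASE-13876.
// All names here are illustrative, not the real HBase API.
public class HeapTunerSketch {
    enum Step { INCREASE_BLOCK_CACHE, INCREASE_MEMSTORE, NEUTRAL }

    // Percent change between two consecutive tuner periods; this is the
    // "normalized derivative" the description refers to.
    static double percentChange(long prev, long curr) {
        if (prev == 0) {
            return curr == 0 ? 0.0 : 1.0;
        }
        return (double) (curr - prev) / prev;
    }

    // Compare how fast evictions vs. flushes grew over the last two periods
    // and shift memory toward whichever side is under more pressure.
    static Step decide(long prevEvictions, long currEvictions,
                       long prevFlushes, long currFlushes) {
        double evictionDelta = percentChange(prevEvictions, currEvictions);
        double flushDelta = percentChange(prevFlushes, currFlushes);
        if (evictionDelta > flushDelta) {
            return Step.INCREASE_BLOCK_CACHE;
        }
        if (flushDelta > evictionDelta) {
            return Step.INCREASE_MEMSTORE;
        }
        return Step.NEUTRAL;
    }

    public static void main(String[] args) {
        // Evictions grew 50% while flushes grew 5%: favor the block cache.
        System.out.println(decide(100, 150, 100, 105));
    }
}
```

On the next period, comparing the new deltas against the old ones would show whether the last step actually reduced combined evictions and flushes; if not, the change is reverted, which is what filters out short-term read/write spikes.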



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
