[ https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13542149#comment-13542149 ]
Hadoop QA commented on HBASE-7404:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12562905/7404-trunk-v12.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 18 new or modified tests.

{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.

{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning message.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.

{color:red}-1 core tests{color}. The patch failed these unit tests:
  org.apache.hadoop.hbase.regionserver.TestSplitTransaction

{color:red}-1 core zombie tests{color}.
There is 1 zombie test:
  at org.apache.hadoop.hbase.master.TestOpenedRegionHandler.testOpenedRegionHandlerOnMasterRestart(TestOpenedRegionHandler.java:104)

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//console

This message is automatically generated.
> Bucket Cache: A solution about CMS, Heap Fragment and Big Cache on HBASE
> ----------------------------------------------------------------------
>
>                 Key: HBASE-7404
>                 URL: https://issues.apache.org/jira/browse/HBASE-7404
>             Project: HBase
>          Issue Type: New Feature
>    Affects Versions: 0.94.3
>            Reporter: chunhui shen
>            Assignee: chunhui shen
>             Fix For: 0.96.0, 0.94.5
>
>         Attachments: 7404-trunk-v10.patch, 7404-trunk-v11.patch, 7404-trunk-v12.patch, BucketCache.pdf, hbase-7404-94v2.patch, hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket Cache.pdf
>
> First, thanks to @neil from Fusion-io for sharing the source code.
>
> What is Bucket Cache?
> It can greatly decrease CMS pauses and heap fragmentation caused by GC, and it supports a large cache space for high read performance by using a high-speed disk such as Fusion-io.
> 1. An implementation of block cache, like LruBlockCache.
> 2. Manages blocks' storage positions itself through the Bucket Allocator.
> 3. Cached blocks can be stored in memory or in the file system.
> 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to decrease CMS pauses and fragmentation from GC.
> 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space.
>
> What about SlabCache?
> We studied and tested SlabCache first, but the results were bad, because:
> 1. SlabCache uses SingleSizeCache, whose memory utilization is low because block sizes vary, especially with DataBlockEncoding.
> 2. SlabCache is used inside DoubleBlockCache: a block is cached in both SlabCache and LruBlockCache, and on a SlabCache hit the block is put into LruBlockCache again, so CMS pauses and heap fragmentation do not get any better.
> 3. Direct (off-heap) performance is not as good as on-heap and may cause OOM, so we recommend the "heap" engine.
>
> See more in the attachment and in the patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
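The Bucket Allocator mentioned in point 2 of the description can be pictured as a slab-style allocator: each fixed-capacity bucket is carved into equal slots of a single size class, so freeing and reallocating blocks of similar size reuses whole slots and never fragments the bucket. The sketch below is only an illustration of that idea under stated assumptions — all class names, sizes, and methods here are hypothetical, not the code from the attached patch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the bucket-allocation idea from HBASE-7404:
// each bucket is a fixed-size region divided into equal slots of one
// size class; a block is placed in the smallest class that fits it.
class BucketSketch {
    static final int BUCKET_CAPACITY = 4 * 1024 * 1024;            // 4 MB per bucket (assumed)
    static final int[] SIZE_CLASSES = {8 * 1024, 16 * 1024, 64 * 1024}; // assumed classes

    static class Bucket {
        final int slotSize;
        final Deque<Integer> freeOffsets = new ArrayDeque<>();

        Bucket(int slotSize) {
            this.slotSize = slotSize;
            // Pre-carve the bucket into equal slots; only offsets are tracked.
            for (int off = 0; off + slotSize <= BUCKET_CAPACITY; off += slotSize) {
                freeOffsets.push(off);
            }
        }

        Integer allocate() { return freeOffsets.poll(); } // offset, or null if bucket is full
        void free(int offset) { freeOffsets.push(offset); } // slot is reusable as-is: no fragmentation
    }

    final Bucket[] buckets = new Bucket[SIZE_CLASSES.length];

    BucketSketch() {
        for (int i = 0; i < SIZE_CLASSES.length; i++) {
            buckets[i] = new Bucket(SIZE_CLASSES[i]);
        }
    }

    // Pick the smallest size class that fits the block, like a slab allocator.
    Integer allocate(int blockSize) {
        for (int i = 0; i < SIZE_CLASSES.length; i++) {
            if (blockSize <= SIZE_CLASSES[i]) {
                return buckets[i].allocate();
            }
        }
        return null; // larger than any class: caller would fall back elsewhere
    }
}
```

Because every slot in a bucket has the same size, a freed slot can hold any later block of that class, which is why this layout avoids the heap fragmentation the description attributes to CMS with varied block sizes.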