[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7404:
--------------------------

     Description: 
First, thanks to @neil from Fusion-IO for sharing the source code.

Usage:

1. Use the bucket cache as the main memory cache, configured as follows (see the illustrative snippet after this usage section):
– "hbase.bucketcache.ioengine" = "heap"
– "hbase.bucketcache.size" = 0.4 (the size of the bucket cache; 0.4 means 40% of the max heap size)

2. Use the bucket cache as a secondary cache, configured as follows:
– "hbase.bucketcache.ioengine" = "file:/disk1/hbase/cache.data" (the file path where the block data is stored)
– "hbase.bucketcache.size" = 1024 (the size of the bucket cache in MB, so 1024 means 1 GB)
– "hbase.bucketcache.combinedcache.enabled" = false (the default value is true)

See org.apache.hadoop.hbase.io.hfile.CacheConfig and 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache for more configuration options.
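
As a quick illustration of the two modes above, the snippet below shows the same properties being set through the Hadoop Configuration API. In practice they would be set in hbase-site.xml on each region server; the code form is only used here to keep the keys, values and comments together, and the file path is just the example from the usage above.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigExample {
  public static void main(String[] args) {
    // Mode 1: bucket cache as the main cache (combined with the LRU cache),
    // backed by the Java heap and sized to 40% of the maximum heap.
    Configuration asMainCache = HBaseConfiguration.create();
    asMainCache.set("hbase.bucketcache.ioengine", "heap");
    asMainCache.setFloat("hbase.bucketcache.size", 0.4f);

    // Mode 2: bucket cache as a secondary cache backed by a file on fast
    // storage (e.g. Fusion-io). Here the size is an absolute value in MB
    // (1024 MB = 1 GB) and the combined mode is switched off.
    Configuration asSecondaryCache = HBaseConfiguration.create();
    asSecondaryCache.set("hbase.bucketcache.ioengine", "file:/disk1/hbase/cache.data");
    asSecondaryCache.setInt("hbase.bucketcache.size", 1024);
    asSecondaryCache.setBoolean("hbase.bucketcache.combinedcache.enabled", false);
  }
}
{code}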


What is the Bucket Cache?
It can greatly reduce the CMS pauses and heap fragmentation caused by GC.
It provides a large cache space for high read performance by using high-speed storage such as Fusion-io.

1. An implementation of block cache, like LruBlockCache
2. Manages the blocks' storage positions itself through a bucket allocator
3. Cached blocks can be stored in memory or on the file system
4. BucketCache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to reduce the CMS pauses and heap fragmentation caused by GC (a sketch of this combined mode follows this list)
5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space
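
To make item 4 above concrete: in the combined mode the small, frequently hit index and bloom blocks stay in the on-heap LruBlockCache, while the bulk of the data blocks live in the bucket cache. The sketch below only illustrates that routing decision; the class and method names are simplified stand-ins, not HBase's actual BlockCache API.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-ins for illustration only; these are not HBase's real types.
enum BlockKind { DATA, INDEX, BLOOM }

// A plain map standing in for a real cache implementation (no eviction logic).
class SimpleCache {
  private final Map<String, byte[]> blocks = new ConcurrentHashMap<>();
  void cache(String key, byte[] block) { blocks.put(key, block); }
  byte[] get(String key) { return blocks.get(key); }
}

// Illustrates the combined-mode routing: data blocks go to the bucket cache
// (heap, offheap or file backed), while index/bloom blocks stay in the small
// on-heap LRU cache, which is what keeps GC pressure and fragmentation down.
class CombinedCacheSketch {
  private final SimpleCache onHeapLru = new SimpleCache();
  private final SimpleCache bucketCache = new SimpleCache();

  void cacheBlock(String key, byte[] block, BlockKind kind) {
    if (kind == BlockKind.DATA) {
      bucketCache.cache(key, block);
    } else {
      onHeapLru.cache(key, block);
    }
  }

  byte[] getBlock(String key) {
    byte[] block = onHeapLru.get(key);
    return block != null ? block : bucketCache.get(key);
  }

  public static void main(String[] args) {
    CombinedCacheSketch cache = new CombinedCacheSketch();
    cache.cacheBlock("hfile1#0", new byte[64 * 1024], BlockKind.DATA);    // routed to the bucket cache
    cache.cacheBlock("hfile1#idx", new byte[4 * 1024], BlockKind.INDEX);  // kept in the on-heap cache
    System.out.println(cache.getBlock("hfile1#0").length);                // prints 65536
  }
}
{code}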


How about SlabCache?
We studied and tested SlabCache first, but the results were poor, because:
1. SlabCache uses SingleSizeCache, so its memory utilization is low when block sizes vary, especially with DataBlockEncoding (for example, a 20 KB encoded block cached in a slab sized for 64 KB blocks wastes roughly two thirds of that slab).
2. SlabCache is used inside DoubleBlockCache, where a block is cached in both SlabCache and LruBlockCache, and a block hit in SlabCache is put into LruBlockCache again, so CMS and heap fragmentation do not improve at all.
3. Direct (off-heap) memory does not perform as well as the heap and may cause OOM, so we recommend the "heap" engine.

See the attachment and the patch for more details.



  was:
First, thanks @neil from Fusion-IO share the source code.

Usage:

1.Use bucket cache as a mainly memory cache, configure as the following:
–"hbase.bucketcache.ioengine" "heap"
–"hbase.bucketcache.size" 0.4 (size for bucket cache, 0.4 is a percentage of 
max heap size)

2.Use bucket cache as a secondary cache, configure as the following:
–"hbase.bucketcache.ioengine" "file:/disk1/hbase/cache.data"(The file path 
where to store the block data)
–"hbase.bucketcache.size" 1024 (size for bucket cache, unit is MB, so 1024 
means 1GB)
–"hbase.bucketcache.combinedcache.enabled" false

See more configurations from org.apache.hadoop.hbase.io.hfile.CacheConfig and 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache


What's Bucket Cache? 
It could greatly decrease CMS and heap fragment by GC
It support a large cache space for High Read Performance by using high speed 
disk like Fusion-io

1.An implementation of block cache like LruBlockCache
2.Self manage blocks' storage position through Bucket Allocator
3.The cached blocks could be stored in the memory or file system
4.Bucket Cache could be used as a mainly block cache(see CombinedBlockCache), 
combined with LruBlockCache to decrease CMS and fragment by GC.
5.BucketCache also could be used as a secondary cache(e.g. using Fusionio to 
store block) to enlarge cache space


How about SlabCache?
We have studied and test SlabCache first, but the result is bad, because:
1.SlabCache use SingleSizeCache, its use ratio of memory is low because kinds 
of block size, especially using DataBlockEncoding
2.SlabCache is uesd in DoubleBlockCache, block is cached both in SlabCache and 
LruBlockCache, put the block to LruBlockCache again if hit in SlabCache , it 
causes CMS and heap fragment don't get any better
3.Direct heap performance is not good as heap, and maybe cause OOM, so we 
recommend using "heap" engine 

See more in the attachment and in the patch



    Release Note: 
BucketCache is another implementation of BlockCache. It supports a big block cache for high performance and greatly decreases the CMS pauses and JVM heap fragmentation caused by read activity.


Usage:

1. Use the bucket cache as the main memory cache, configured as follows:
– "hbase.bucketcache.ioengine" = "heap"
– "hbase.bucketcache.size" = 0.4 (the size of the bucket cache; 0.4 means 40% of the max heap size)

2. Use the bucket cache as a secondary cache, configured as follows:
– "hbase.bucketcache.ioengine" = "file:/disk1/hbase/cache.data" (the file path where the block data is stored)
– "hbase.bucketcache.size" = 1024 (the size of the bucket cache in MB, so 1024 means 1 GB)
– "hbase.bucketcache.combinedcache.enabled" = false (the default value is true)


  was:BucketCache is another implementation of BlockCache, which supports big 
block cache for high performance and could greatly decrease CMS and heap 
fragment caused by reading.

    
> Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
> ----------------------------------------------------------------------
>
>                 Key: HBASE-7404
>                 URL: https://issues.apache.org/jira/browse/HBASE-7404
>             Project: HBase
>          Issue Type: New Feature
>    Affects Versions: 0.94.3
>            Reporter: chunhui shen
>            Assignee: chunhui shen
>             Fix For: 0.96.0, 0.94.5
>
>         Attachments: 7404-trunk-v10.patch, 7404-trunk-v11.patch, 
> 7404-trunk-v12.patch, 7404-trunk-v13.patch, 7404-trunk-v13.txt, 
> 7404-trunk-v14.patch, BucketCache.pdf, hbase-7404-94v2.patch, 
> hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket 
> Cache.pdf
>
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
