[ https://issues.apache.org/jira/browse/HBASE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773962#comment-16773962 ]

Wellington Chevreuil commented on HBASE-21874:
----------------------------------------------

Thanks for the detailed explanations, [~ram_krish] and [~anoop.hbase]!

For the record, while testing the latest patch that removed the configurable buffer 
size, I got a "Map failed" OOME from sun.nio.ch.FileChannelImpl while trying to map 
a 490GB pmem device. This is due to the Linux default maximum number of memory 
map areas, defined by *max_map_count*, currently being 64K. With a fixed buffer 
size of 4MB, that caps the usable pmem capacity at 256GB, so we need to raise 
that limit. For example, to use a 500GB pmem device, at least 128K map areas 
are needed:
{noformat}
sysctl -w vm.max_map_count=130000
{noformat}
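
To make the arithmetic concrete, here is a small standalone sketch (illustrative 
only; the device path and sizes are assumptions, not taken from the patch) of how 
mapping the device in fixed 4MB chunks consumes one map area per chunk:
{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PmemMapCount {
  // Fixed chunk size, matching the patch's 4MB buffers.
  private static final long CHUNK_SIZE = 4L * 1024 * 1024;

  public static void main(String[] args) throws IOException {
    // Illustrative 500GB pmem device exposed as a DAX file.
    long capacity = 500L * 1024 * 1024 * 1024;
    long chunks = (capacity + CHUNK_SIZE - 1) / CHUNK_SIZE;
    // 500GB / 4MB = 128000 map areas, well above the 64K default.
    System.out.println("map areas needed: " + chunks);

    try (RandomAccessFile raf = new RandomAccessFile("/mnt/pmem0/bucketcache", "rw");
         FileChannel channel = raf.getChannel()) {
      MappedByteBuffer[] buffers = new MappedByteBuffer[(int) chunks];
      for (int i = 0; i < chunks; i++) {
        long offset = (long) i * CHUNK_SIZE;
        long len = Math.min(CHUNK_SIZE, capacity - offset);
        // Each map() call takes one vm.max_map_count slot; past ~64K
        // mappings (256GB at 4MB each) the JVM throws "Map failed".
        buffers[i] = channel.map(FileChannel.MapMode.READ_WRITE, offset, len);
      }
    }
  }
}
{code}
With the default limit the loop fails once roughly 256GB has been mapped, which 
matches the failure I saw on the 490GB device above.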

> Bucket cache on Persistent memory
> ---------------------------------
>
>                 Key: HBASE-21874
>                 URL: https://issues.apache.org/jira/browse/HBASE-21874
>             Project: HBase
>          Issue Type: New Feature
>          Components: BucketCache
>    Affects Versions: 3.0.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HBASE-21874.patch, HBASE-21874.patch, 
> HBASE-21874_V2.patch, Pmem_BC.png
>
>
> Non-volatile persistent memory devices are byte addressable like DRAM (e.g. 
> Intel DCPMM). The bucket cache implementation can take advantage of this new 
> memory type and make use of the existing offheap data structures to serve 
> data directly from this memory area without having to bring the data 
> onheap.
> The patch is a new IOEngine implementation that works with persistent 
> memory.
> Note: here we don't make use of the persistence nature of the device; we 
> just make use of the big memory it provides.
> Performance numbers to follow.
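
For readers skimming the description above, a minimal sketch of the zero-copy 
read path such an mmap-backed IOEngine enables. This is hypothetical, not the 
attached patch; the class name, constructor, and read() signature are 
illustrative assumptions:
{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical sketch: serve cached blocks as views over a
// memory-mapped pmem file, with no onheap copy of the data.
public class PmemIOEngineSketch {
  private final ByteBuffer mapped;

  public PmemIOEngineSketch(String path, int capacity) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
         FileChannel channel = raf.getChannel()) {
      // One mapping for brevity (a single map() call is capped at 2GB);
      // the real engine would map the device in fixed-size chunks.
      this.mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, capacity);
    }
  }

  // Zero-copy read: the returned buffer shares the mapped pmem region.
  public ByteBuffer read(int offset, int length) {
    ByteBuffer view = mapped.duplicate(); // private position/limit
    view.position(offset).limit(offset + length);
    return view.slice();
  }
}
{code}
The duplicate()/slice() pattern is what lets concurrent readers share one 
mapping safely, since each returned view carries its own position and limit 
while still pointing at the same backing pmem.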



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
