[ 
https://issues.apache.org/jira/browse/HBASE-20003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370326#comment-16370326
 ] 

Andrew Purtell edited comment on HBASE-20003 at 2/20/18 5:38 PM:
-----------------------------------------------------------------

Cool, looking forward to the doc. 

Would the region replica placement be rack aware? We're going to need that if 
we want to align HBase level data availability precisely with HDFS.

When HDFS notices a block replica is missing, i.e. the replication factor for a 
block has dropped too low, it chooses another live DN and makes a new copy of 
the block there. We don't do that with region replicas, but we could.
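
For reference, a rough sketch of what that looks like at the HDFS file level, 
assuming the plain Hadoop FileSystem client (the path and target factor here 
are just examples). HDFS re-replicates missing block replicas on its own; this 
only shows the knob for the target factor:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Example path only; any HFile or WAL file would do.
    Path path = new Path("/hbase/data/default/t1/r1/cf/hfile1");
    FileStatus status = fs.getFileStatus(path);
    short target = 3; // example target replication factor
    if (status.getReplication() < target) {
      // Reassert the target factor; the NameNode then schedules new block
      // copies on live DataNodes to bring the file back up to it.
      fs.setReplication(path, target);
    }
  }
}
{code}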

Also, how would we account for the fact that the HDFS block replication level 
for the WAL can be increased from 3 to a higher number on larger clusters? If 
I'm running a 100 node cluster, 3 HDFS replicas for the WAL is probably good 
enough. At 1000 nodes, I might want that to be 5. I suppose if we provided the 
same capability to increase the number of region replicas, that would be 
covered too.
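
If we do go that route, bumping the region replica count per table would be 
the analogue of raising the WAL's replication. A minimal sketch, assuming the 
HBase 2.x client API (the table name and the count of 5 are just examples):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RaiseRegionReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("t1"); // example table
      TableDescriptor current = admin.getDescriptor(table);
      // Rebuild the descriptor with a higher region replica count,
      // e.g. 5 for a very large cluster, then apply it.
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          .setRegionReplication(5)
          .build();
      admin.modifyTable(updated);
    }
  }
}
{code}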



> WALLess HBase on Persistent Memory
> ----------------------------------
>
>                 Key: HBASE-20003
>                 URL: https://issues.apache.org/jira/browse/HBASE-20003
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>            Priority: Major
>
> This JIRA aims to make use of persistent memory (pmem) technologies in HBase. 
> One such usage is to make the memstore reside on pmem. Making a persistent 
> memstore would remove the need for the WAL and pave the way for a WALLess HBase. 
> The existing region replica feature could be used here to ensure that the data 
> written to the memstores is synchronously replicated to the replicas, giving 
> strong consistency of the data (pipeline model).
> Advantages :
> - Data Availability : Since the data across replicas is consistent 
> (synchronously written), our data is always 100% available.
> - Lower MTTR : It becomes easier/faster to switch over to a replica on a 
> primary region failure, as there is no WAL replay involved. Rebuilding the 
> in-memory memstore data is also much faster than reading and replaying the 
> WAL.
> - Possibility of bigger memstores : These pmem devices are designed to offer 
> larger capacities than DRAM, so they would also enable bigger memstores, 
> which leads to less flush/compaction IO. 
> - Removes the dependency on HDFS in the write path
> An initial PoC has been designed and developed. Testing is underway and we 
> will publish the PoC results along with the design doc soon. The PoC doc will 
> cover the design decisions, the libraries considered for working with these 
> pmem devices, the pros and cons of those libraries, and the performance results.
> Note : Next-gen memory technologies using 3DXPoint provide a persistent memory 
> feature. Such memory DIMMs are soon to appear in the market. The PoC was done 
> around Intel's ApachePass (AEP).
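
To make the pipeline model in the description concrete, here is a purely 
illustrative sketch of the intended write path (apply to a persistent memstore, 
synchronously replicate to the replicas, then ack). None of these types exist 
in HBase; they are hypothetical placeholders, not the PoC code:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;

// Hypothetical: a memstore backed by persistent memory, durable once apply() returns.
interface PersistentMemstore {
  void apply(Mutation mutation) throws IOException;
}

// Hypothetical: a synchronous channel to one region replica.
interface ReplicaSink {
  void replicate(Mutation mutation) throws IOException;
}

class WalLessWritePath {
  private final PersistentMemstore primary;
  private final List<ReplicaSink> replicas;

  WalLessWritePath(PersistentMemstore primary, List<ReplicaSink> replicas) {
    this.primary = primary;
    this.replicas = replicas;
  }

  void write(Mutation mutation) throws IOException {
    // 1. Apply to the primary's pmem memstore; durability comes from pmem,
    //    so there is no WAL append.
    primary.apply(mutation);
    // 2. Synchronously push the mutation to every replica before acking,
    //    which is what keeps the replicas strongly consistent.
    for (ReplicaSink replica : replicas) {
      replica.replicate(mutation);
    }
    // 3. Only after all replicas have applied it is the client acknowledged.
  }
}
{code}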



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
