Thanks Ian. I DID miss the point. The person who started the chain is a 
different person :)

- Sri



>________________________________
> From: Ian Varley <[email protected]>
>To: "[email protected]" <[email protected]> 
>Sent: Saturday, 8 December 2012, 1:21
>Subject: Re: PROD/DR - Replication
> 
>Yes, I think so. A single HBase cluster can't (or, at least, really shouldn't) 
>span multiple data centers; the strong consistency you refer to is only 
>available within a cluster. 
>
>But the replication you were referring to in your initial email is cross-data-center, 
>between two or more clusters. That's where you can't get strong 
>consistency. 
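>
>For what it's worth, here is a minimal sketch of how that cross-cluster (asynchronous) 
>replication is typically wired up, assuming an HBase 0.94-era deployment and hypothetical 
>ZooKeeper hosts for the DR cluster:
>
>  # hbase-site.xml on both clusters (replication is off by default in this era):
>  #   <property><name>hbase.replication</name><value>true</value></property>
>
>  # hbase shell, run on the source (PROD) cluster:
>  add_peer '1', 'dr-zk1,dr-zk2,dr-zk3:2181:/hbase'          # DR cluster's ZK quorum (hypothetical hosts)
>  alter 'mytable', {NAME => 'cf', REPLICATION_SCOPE => 1}    # ship this family's edits to peer clusters
>
>Edits are shipped asynchronously from the source region servers' WALs, which is exactly 
>why the DR side can lag behind PROD.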
>
>Ian
>
>
>
>On Dec 7, 2012, at 1:38 PM, "sriraam h" <[email protected]> wrote:
>
>> "Strongly consistent reads/writes: HBase is not an "eventually consistent" 
>> DataStore. This makes it very suitable for tasks such as high-speed counter 
>> aggregation"
>> 
>> http://hbase.apache.org/book/architecture.html
>> 
>> 
>> Am I missing something ?
>> 
>> - Sri
>> 
>> 
>> 
>>> ________________________________
>>> From: Ian Varley <[email protected]>
>>> To: "[email protected]" <[email protected]> 
>>> Sent: Friday, 7 December 2012, 23:49
>>> Subject: Re: PROD/DR - Replication
>>> 
>>> Juan,
>>> 
>>> No; that would mean every single write to HBase would have to wait for an ACK from 
>>> a remote data center, which would decrease your cluster throughput 
>>> dramatically. If you need that, consider other database solutions.
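>>> 
>>> As a rough, hypothetical illustration: with ~1 ms round trips inside a data center, a 
>>> single synchronous writer tops out around 1,000 writes/s; if every write also had to 
>>> wait on a ~50 ms cross-data-center round trip, that same writer drops to roughly 
>>> 1 / 0.050 s = 20 writes/s, about a 50x reduction before batching or handler counts 
>>> even enter the picture.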
>>> 
>>> Ian
>>> 
>>> On Dec 7, 2012, at 12:14 PM, Juan P. wrote:
>>> 
>>> I was reading up on HBase Replication and wanted to make sure I'm not
>>> missing something.
>>> 
>>> Given that replication happens asynchronously, the replication strategy has
>>> an "eventually consistent" policy.
>>> 
>>> I was considering using this feature for Production / Disaster Recovery
>>> setup.
>>> 
>>> Is there a way to enforce consistency so that if my PROD environment should
>>> ever go down, I can be 100% sure that DR will be completely up to date?
>>> 
>>> Thank you,
>>> Juan
>>> 
>>> 
>>> 
>
>
>
